00:00:00.001 Started by upstream project "autotest-per-patch" build number 132323
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.145 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.146 The recommended git tool is: git
00:00:00.147 using credential 00000000-0000-0000-0000-000000000002
00:00:00.150 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.180 Fetching changes from the remote Git repository
00:00:00.182 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.209 Using shallow fetch with depth 1
00:00:00.209 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.209 > git --version # timeout=10
00:00:00.243 > git --version # 'git version 2.39.2'
00:00:00.243 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.271 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.271 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.766 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.776 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.787 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.787 > git config core.sparsecheckout # timeout=10
00:00:06.797 > git read-tree -mu HEAD # timeout=10
00:00:06.811 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.830 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.830 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.909 [Pipeline] Start of Pipeline
00:00:06.921 [Pipeline] library
00:00:06.923 Loading library shm_lib@master
00:00:06.923 Library shm_lib@master is cached. Copying from home.
00:00:06.938 [Pipeline] node
00:00:06.950 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.952 [Pipeline] {
00:00:06.962 [Pipeline] catchError
00:00:06.964 [Pipeline] {
00:00:06.976 [Pipeline] wrap
00:00:06.983 [Pipeline] {
00:00:06.989 [Pipeline] stage
00:00:06.991 [Pipeline] { (Prologue)
00:00:07.246 [Pipeline] sh
00:00:07.526 + logger -p user.info -t JENKINS-CI
00:00:07.545 [Pipeline] echo
00:00:07.547 Node: WFP8
00:00:07.553 [Pipeline] sh
00:00:07.851 [Pipeline] setCustomBuildProperty
00:00:07.866 [Pipeline] echo
00:00:07.868 Cleanup processes
00:00:07.872 [Pipeline] sh
00:00:08.154 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.154 1984742 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.166 [Pipeline] sh
00:00:08.451 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.451 ++ grep -v 'sudo pgrep'
00:00:08.451 ++ awk '{print $1}'
00:00:08.451 + sudo kill -9
00:00:08.451 + true
00:00:08.465 [Pipeline] cleanWs
00:00:08.474 [WS-CLEANUP] Deleting project workspace...
00:00:08.474 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.482 [WS-CLEANUP] done
00:00:08.485 [Pipeline] setCustomBuildProperty
00:00:08.497 [Pipeline] sh
00:00:08.777 + sudo git config --global --replace-all safe.directory '*'
00:00:08.880 [Pipeline] httpRequest
00:00:09.345 [Pipeline] echo
00:00:09.347 Sorcerer 10.211.164.20 is alive
00:00:09.358 [Pipeline] retry
00:00:09.360 [Pipeline] {
00:00:09.374 [Pipeline] httpRequest
00:00:09.378 HttpMethod: GET
00:00:09.379 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.379 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.410 Response Code: HTTP/1.1 200 OK
00:00:09.411 Success: Status code 200 is in the accepted range: 200,404
00:00:09.411 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:40.357 [Pipeline] }
00:00:40.374 [Pipeline] // retry
00:00:40.382 [Pipeline] sh
00:00:40.670 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:40.687 [Pipeline] httpRequest
00:00:41.071 [Pipeline] echo
00:00:41.073 Sorcerer 10.211.164.20 is alive
00:00:41.083 [Pipeline] retry
00:00:41.085 [Pipeline] {
00:00:41.099 [Pipeline] httpRequest
00:00:41.103 HttpMethod: GET
00:00:41.104 URL: http://10.211.164.20/packages/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz
00:00:41.104 Sending request to url: http://10.211.164.20/packages/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz
00:00:41.121 Response Code: HTTP/1.1 200 OK
00:00:41.122 Success: Status code 200 is in the accepted range: 200,404
00:00:41.122 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz
00:01:31.398 [Pipeline] }
00:01:31.416 [Pipeline] // retry
00:01:31.426 [Pipeline] sh
00:01:31.711 + tar --no-same-owner -xf spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz
00:01:35.012 [Pipeline] sh
00:01:35.300 + git -C spdk log --oneline -n5
00:01:35.300 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option
00:01:35.300 73f18e890 lib/reduce: fix the magic number of empty mapping detection.
00:01:35.300 029355612 bdev_ut: add manual examine bdev unit test case
00:01:35.300 fc96810c2 bdev: remove bdev from examine allow list on unregister
00:01:35.300 a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public
00:01:35.311 [Pipeline] }
00:01:35.324 [Pipeline] // stage
00:01:35.332 [Pipeline] stage
00:01:35.334 [Pipeline] { (Prepare)
00:01:35.356 [Pipeline] writeFile
00:01:35.372 [Pipeline] sh
00:01:35.653 + logger -p user.info -t JENKINS-CI
00:01:35.663 [Pipeline] sh
00:01:35.944 + logger -p user.info -t JENKINS-CI
00:01:35.955 [Pipeline] sh
00:01:36.239 + cat autorun-spdk.conf
00:01:36.239 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.239 SPDK_TEST_NVMF=1
00:01:36.239 SPDK_TEST_NVME_CLI=1
00:01:36.239 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:36.239 SPDK_TEST_NVMF_NICS=e810
00:01:36.239 SPDK_TEST_VFIOUSER=1
00:01:36.239 SPDK_RUN_UBSAN=1
00:01:36.239 NET_TYPE=phy
00:01:36.247 RUN_NIGHTLY=0
00:01:36.252 [Pipeline] readFile
00:01:36.276 [Pipeline] withEnv
00:01:36.278 [Pipeline] {
00:01:36.292 [Pipeline] sh
00:01:36.576 + set -ex
00:01:36.576 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:36.576 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:36.576 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.576 ++ SPDK_TEST_NVMF=1
00:01:36.576 ++ SPDK_TEST_NVME_CLI=1
00:01:36.576 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:36.576 ++ SPDK_TEST_NVMF_NICS=e810
00:01:36.576 ++ SPDK_TEST_VFIOUSER=1
00:01:36.576 ++ SPDK_RUN_UBSAN=1
00:01:36.576 ++ NET_TYPE=phy
00:01:36.576 ++ RUN_NIGHTLY=0
00:01:36.576 + case $SPDK_TEST_NVMF_NICS in
00:01:36.576 + DRIVERS=ice
00:01:36.576 + [[ tcp == \r\d\m\a ]]
00:01:36.576 + [[ -n ice ]]
00:01:36.576 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:36.576 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:36.576 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:36.576 rmmod: ERROR: Module irdma is not currently loaded
00:01:36.576 rmmod: ERROR: Module i40iw is not currently loaded
00:01:36.576 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:36.576 + true
00:01:36.576 + for D in $DRIVERS
00:01:36.576 + sudo modprobe ice
00:01:36.576 + exit 0
00:01:36.584 [Pipeline] }
00:01:36.598 [Pipeline] // withEnv
00:01:36.603 [Pipeline] }
00:01:36.617 [Pipeline] // stage
00:01:36.626 [Pipeline] catchError
00:01:36.628 [Pipeline] {
00:01:36.641 [Pipeline] timeout
00:01:36.641 Timeout set to expire in 1 hr 0 min
00:01:36.643 [Pipeline] {
00:01:36.656 [Pipeline] stage
00:01:36.659 [Pipeline] { (Tests)
00:01:36.672 [Pipeline] sh
00:01:36.955 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:36.955 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:36.955 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:36.956 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:36.956 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:36.956 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:36.956 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:36.956 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:36.956 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:36.956 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:36.956 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:36.956 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:36.956 + source /etc/os-release
00:01:36.956 ++ NAME='Fedora Linux'
00:01:36.956 ++ VERSION='39 (Cloud Edition)'
00:01:36.956 ++ ID=fedora
00:01:36.956 ++ VERSION_ID=39
00:01:36.956 ++ VERSION_CODENAME=
00:01:36.956 ++ PLATFORM_ID=platform:f39
00:01:36.956 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:36.956 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:36.956 ++ LOGO=fedora-logo-icon
00:01:36.956 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:36.956 ++ HOME_URL=https://fedoraproject.org/
00:01:36.956 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:36.956 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:36.956 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:36.956 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:36.956 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:36.956 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:36.956 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:36.956 ++ SUPPORT_END=2024-11-12
00:01:36.956 ++ VARIANT='Cloud Edition'
00:01:36.956 ++ VARIANT_ID=cloud
00:01:36.956 + uname -a
00:01:36.956 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:36.956 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:39.491 Hugepages
00:01:39.491 node hugesize free / total
00:01:39.491 node0 1048576kB 0 / 0
00:01:39.491 node0 2048kB 0 / 0
00:01:39.491 node1 1048576kB 0 / 0
00:01:39.491 node1 2048kB 0 / 0
00:01:39.491
00:01:39.491 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:39.491 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:39.491 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:39.491 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:39.491 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:39.491 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:39.491 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:39.491 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:39.491 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:39.491 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:39.491 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:39.491 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:39.491 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:39.491 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:39.491 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:39.491 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:39.491 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:39.491 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:39.491 + rm -f /tmp/spdk-ld-path
00:01:39.491 + source autorun-spdk.conf
00:01:39.491 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:39.491 ++ SPDK_TEST_NVMF=1
00:01:39.491 ++ SPDK_TEST_NVME_CLI=1
00:01:39.491 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:39.491 ++ SPDK_TEST_NVMF_NICS=e810
00:01:39.491 ++ SPDK_TEST_VFIOUSER=1
00:01:39.491 ++ SPDK_RUN_UBSAN=1
00:01:39.491 ++ NET_TYPE=phy
00:01:39.491 ++ RUN_NIGHTLY=0
00:01:39.491 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:39.491 + [[ -n '' ]]
00:01:39.491 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:39.491 + for M in /var/spdk/build-*-manifest.txt
00:01:39.491 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:39.491 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:39.491 + for M in /var/spdk/build-*-manifest.txt
00:01:39.491 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:39.491 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:39.491 + for M in /var/spdk/build-*-manifest.txt
00:01:39.491 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:39.491 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:39.491 ++ uname
00:01:39.491 + [[ Linux == \L\i\n\u\x ]]
00:01:39.491 + sudo dmesg -T
00:01:39.752 + sudo dmesg --clear
00:01:39.752 + dmesg_pid=1986199
00:01:39.752 + [[ Fedora Linux == FreeBSD ]]
00:01:39.752 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:39.752 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:39.752 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:39.752 + [[ -x /usr/src/fio-static/fio ]]
00:01:39.752 + export FIO_BIN=/usr/src/fio-static/fio
00:01:39.752 + FIO_BIN=/usr/src/fio-static/fio
00:01:39.752 + sudo dmesg -Tw
00:01:39.752 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:39.752 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:39.752 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:39.752 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:39.752 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:39.752 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:39.752 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:39.752 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:39.752 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:39.752 11:12:53 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:39.752 11:12:53 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:39.752 11:12:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:39.752 11:12:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:39.752 11:12:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:39.752 11:12:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:39.752 11:12:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:39.752 11:12:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:39.752 11:12:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:39.752 11:12:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:39.752 11:12:53 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:39.752 11:12:53 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:39.752 11:12:53 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:39.752 11:12:53 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:39.752 11:12:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:39.752 11:12:53 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:39.752 11:12:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:39.752 11:12:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:39.752 11:12:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:39.752 11:12:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.752 11:12:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.752 11:12:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.752 11:12:53 -- paths/export.sh@5 -- $ export PATH
00:01:39.752 11:12:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.752 11:12:53 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:39.752 11:12:53 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:39.752 11:12:53 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732011173.XXXXXX
00:01:39.752 11:12:53 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732011173.uo57t5
00:01:39.752 11:12:53 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:39.752 11:12:53 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:39.752 11:12:53 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:39.752 11:12:53 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:39.752 11:12:53 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:39.752 11:12:53 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:39.752 11:12:53 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:39.752 11:12:53 -- common/autotest_common.sh@10 -- $ set +x
00:01:39.753 11:12:53 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:39.753 11:12:53 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:39.753 11:12:53 -- pm/common@17 -- $ local monitor
00:01:39.753 11:12:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.753 11:12:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.753 11:12:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.753 11:12:53 -- pm/common@21 -- $ date +%s
00:01:39.753 11:12:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.753 11:12:53 -- pm/common@21 -- $ date +%s
00:01:39.753 11:12:53 -- pm/common@25 -- $ sleep 1
00:01:39.753 11:12:53 -- pm/common@21 -- $ date +%s
00:01:39.753 11:12:53 -- pm/common@21 -- $ date +%s
00:01:39.753 11:12:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732011173
00:01:39.753 11:12:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732011173
00:01:39.753 11:12:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732011173
00:01:39.753 11:12:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732011173
00:01:40.014 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732011173_collect-cpu-load.pm.log
00:01:40.014 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732011173_collect-vmstat.pm.log
00:01:40.014 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732011173_collect-cpu-temp.pm.log
00:01:40.014 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732011173_collect-bmc-pm.bmc.pm.log
00:01:40.976 11:12:54 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:40.976 11:12:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:40.976 11:12:54 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:40.976 11:12:54 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:40.976 11:12:54 -- spdk/autobuild.sh@16 -- $ date -u
00:01:40.976 Tue Nov 19 10:12:54 AM UTC 2024
00:01:40.976 11:12:54 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:40.976 v25.01-pre-197-gdcc2ca8f3
00:01:40.976 11:12:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:40.976 11:12:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:40.976 11:12:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:40.976 11:12:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:40.976 11:12:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:40.976 11:12:54 -- common/autotest_common.sh@10 -- $ set +x
00:01:40.976 ************************************
00:01:40.976 START TEST ubsan
00:01:40.976 ************************************
00:01:40.976 11:12:54 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:40.976 using ubsan
00:01:40.976
00:01:40.976 real 0m0.000s
00:01:40.976 user 0m0.000s
00:01:40.976 sys 0m0.000s
00:01:40.976 11:12:54 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:40.976 11:12:54 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:40.976 ************************************
00:01:40.976 END TEST ubsan
00:01:40.977 ************************************
00:01:40.977 11:12:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:40.977 11:12:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:40.977 11:12:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:40.977 11:12:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:40.977 11:12:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:40.977 11:12:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:40.977 11:12:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:40.977 11:12:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:40.977 11:12:54 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:41.235 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:41.235 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:41.493 Using 'verbs' RDMA provider
00:01:54.274 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:06.484 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:06.742 Creating mk/config.mk...done.
00:02:06.742 Creating mk/cc.flags.mk...done.
00:02:06.742 Type 'make' to build.
00:02:06.742 11:13:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:06.742 11:13:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:06.742 11:13:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:06.742 11:13:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:06.742 ************************************
00:02:06.742 START TEST make
00:02:06.742 ************************************
00:02:06.742 11:13:20 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:07.000 make[1]: Nothing to be done for 'all'.
00:02:08.381 The Meson build system
00:02:08.381 Version: 1.5.0
00:02:08.381 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:08.381 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:08.381 Build type: native build
00:02:08.381 Project name: libvfio-user
00:02:08.381 Project version: 0.0.1
00:02:08.381 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:08.381 C linker for the host machine: cc ld.bfd 2.40-14
00:02:08.381 Host machine cpu family: x86_64
00:02:08.381 Host machine cpu: x86_64
00:02:08.381 Run-time dependency threads found: YES
00:02:08.381 Library dl found: YES
00:02:08.381 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:08.381 Run-time dependency json-c found: YES 0.17
00:02:08.381 Run-time dependency cmocka found: YES 1.1.7
00:02:08.381 Program pytest-3 found: NO
00:02:08.381 Program flake8 found: NO
00:02:08.381 Program misspell-fixer found: NO
00:02:08.381 Program restructuredtext-lint found: NO
00:02:08.381 Program valgrind found: YES (/usr/bin/valgrind)
00:02:08.381 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:08.381 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:08.381 Compiler for C supports arguments -Wwrite-strings: YES
00:02:08.381 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:08.381 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:08.381 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:08.381 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:08.381 Build targets in project: 8
00:02:08.381 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:08.381 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:08.381
00:02:08.381 libvfio-user 0.0.1
00:02:08.381
00:02:08.381 User defined options
00:02:08.381 buildtype : debug
00:02:08.381 default_library: shared
00:02:08.381 libdir : /usr/local/lib
00:02:08.381
00:02:08.381 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:08.948 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:08.948 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:08.948 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:08.948 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:08.948 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:08.948 [5/37] Compiling C object samples/null.p/null.c.o
00:02:08.948 [6/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:08.948 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:08.948 [8/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:08.948 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:08.948 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:08.948 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:08.948 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:08.948 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:08.948 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:08.948 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:08.948 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:08.948 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:08.948 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:08.948 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:08.948 [20/37] Compiling C object samples/server.p/server.c.o
00:02:08.948 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:08.948 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:08.948 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:08.948 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:09.207 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:09.207 [26/37] Compiling C object samples/client.p/client.c.o
00:02:09.207 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:09.207 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:09.207 [29/37] Linking target samples/client
00:02:09.207 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:09.207 [31/37] Linking target test/unit_tests
00:02:09.207 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:09.207 [33/37] Linking target samples/lspci
00:02:09.207 [34/37] Linking target samples/server
00:02:09.207 [35/37] Linking target samples/null
00:02:09.207 [36/37] Linking target samples/gpio-pci-idio-16
00:02:09.207 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:09.466 INFO: autodetecting backend as ninja
00:02:09.466 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:09.466 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:09.725 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:09.725 ninja: no work to do.
00:02:15.002 The Meson build system
00:02:15.002 Version: 1.5.0
00:02:15.002 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:15.002 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:15.002 Build type: native build
00:02:15.002 Program cat found: YES (/usr/bin/cat)
00:02:15.002 Project name: DPDK
00:02:15.002 Project version: 24.03.0
00:02:15.002 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:15.002 C linker for the host machine: cc ld.bfd 2.40-14
00:02:15.002 Host machine cpu family: x86_64
00:02:15.002 Host machine cpu: x86_64
00:02:15.002 Message: ## Building in Developer Mode ##
00:02:15.002 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:15.002 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:15.002 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:15.002 Program python3 found: YES (/usr/bin/python3)
00:02:15.002 Program cat found: YES (/usr/bin/cat)
00:02:15.002 Compiler for C supports arguments -march=native: YES
00:02:15.002 Checking for size of "void *" : 8
00:02:15.002 Checking for size of "void *" : 8 (cached)
00:02:15.002 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:15.002 Library m found: YES
00:02:15.002 Library numa found: YES
00:02:15.002 Has header "numaif.h" : YES
00:02:15.002 Library fdt found: NO
00:02:15.002 Library execinfo found: NO
00:02:15.002 Has header "execinfo.h" : YES
00:02:15.002 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:15.002 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:15.002 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:15.002 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:15.002 Run-time dependency openssl found: YES 3.1.1
00:02:15.002 Run-time dependency libpcap found: YES 1.10.4
00:02:15.002 Has header "pcap.h" with dependency libpcap: YES
00:02:15.002 Compiler for C supports arguments -Wcast-qual: YES
00:02:15.002 Compiler for C supports arguments -Wdeprecated: YES
00:02:15.002 Compiler for C supports arguments -Wformat: YES
00:02:15.002 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:15.002 Compiler for C supports arguments -Wformat-security: NO
00:02:15.002 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:15.002 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:15.002 Compiler for C supports arguments -Wnested-externs: YES
00:02:15.002 Compiler for C supports arguments -Wold-style-definition: YES
00:02:15.002 Compiler for C supports arguments -Wpointer-arith: YES
00:02:15.002 Compiler for C supports arguments -Wsign-compare: YES
00:02:15.002 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:15.002 Compiler for C supports arguments -Wundef: YES
00:02:15.002 Compiler for C supports arguments -Wwrite-strings: YES
00:02:15.002 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:15.002 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:15.002 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:15.002 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:15.002 Program objdump found: YES (/usr/bin/objdump)
00:02:15.002 Compiler for C supports arguments -mavx512f: YES
00:02:15.002 Checking if "AVX512 checking" compiles: YES
00:02:15.002 Fetching value of define "__SSE4_2__" : 1
00:02:15.002 Fetching value of define "__AES__" : 1
00:02:15.002 Fetching value of define "__AVX__" : 1
00:02:15.002 Fetching value of define "__AVX2__" : 1
00:02:15.002 Fetching value of define "__AVX512BW__" : 1
00:02:15.002 Fetching value of define "__AVX512CD__" : 1
00:02:15.002 Fetching value of define "__AVX512DQ__" : 1
00:02:15.002 Fetching value of define "__AVX512F__" : 1
00:02:15.002 Fetching value of define "__AVX512VL__" : 1
00:02:15.002 Fetching value of define "__PCLMUL__" : 1
00:02:15.002 Fetching value of define "__RDRND__" : 1
00:02:15.002 Fetching value of define "__RDSEED__" : 1
00:02:15.002 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:15.002 Fetching value of define "__znver1__" : (undefined)
00:02:15.002 Fetching value of define "__znver2__" : (undefined)
00:02:15.002 Fetching value of define "__znver3__" : (undefined)
00:02:15.002 Fetching value of define "__znver4__" : (undefined)
00:02:15.002 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:15.002 Message: lib/log: Defining dependency "log"
00:02:15.002 Message: lib/kvargs: Defining dependency "kvargs"
00:02:15.002 Message: lib/telemetry: Defining dependency "telemetry"
00:02:15.002 Checking for function "getentropy" : NO
00:02:15.002 Message: lib/eal: Defining dependency "eal"
00:02:15.002 Message: lib/ring: Defining dependency "ring"
00:02:15.002 Message: lib/rcu: Defining dependency "rcu"
00:02:15.002 Message: lib/mempool: Defining dependency "mempool"
00:02:15.002 Message: lib/mbuf: Defining dependency "mbuf"
00:02:15.002 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:15.002 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:15.002 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:15.002 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:15.002 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:15.002 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:15.002 Compiler for C supports arguments -mpclmul: YES
00:02:15.002 Compiler for C supports arguments -maes: YES
00:02:15.002 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:15.002 Compiler for C supports arguments -mavx512bw: YES
00:02:15.002 Compiler for C supports arguments -mavx512dq: YES
00:02:15.002 Compiler for C supports arguments -mavx512vl: YES
00:02:15.002 Compiler for C supports arguments
-mvpclmulqdq: YES 00:02:15.002 Compiler for C supports arguments -mavx2: YES 00:02:15.002 Compiler for C supports arguments -mavx: YES 00:02:15.002 Message: lib/net: Defining dependency "net" 00:02:15.002 Message: lib/meter: Defining dependency "meter" 00:02:15.002 Message: lib/ethdev: Defining dependency "ethdev" 00:02:15.002 Message: lib/pci: Defining dependency "pci" 00:02:15.002 Message: lib/cmdline: Defining dependency "cmdline" 00:02:15.002 Message: lib/hash: Defining dependency "hash" 00:02:15.002 Message: lib/timer: Defining dependency "timer" 00:02:15.002 Message: lib/compressdev: Defining dependency "compressdev" 00:02:15.002 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:15.002 Message: lib/dmadev: Defining dependency "dmadev" 00:02:15.002 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:15.002 Message: lib/power: Defining dependency "power" 00:02:15.002 Message: lib/reorder: Defining dependency "reorder" 00:02:15.002 Message: lib/security: Defining dependency "security" 00:02:15.002 Has header "linux/userfaultfd.h" : YES 00:02:15.002 Has header "linux/vduse.h" : YES 00:02:15.002 Message: lib/vhost: Defining dependency "vhost" 00:02:15.002 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.002 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.002 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.002 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.002 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:15.002 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:15.002 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:15.002 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:15.002 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:15.002 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:02:15.002 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:15.002 Configuring doxy-api-html.conf using configuration 00:02:15.002 Configuring doxy-api-man.conf using configuration 00:02:15.002 Program mandb found: YES (/usr/bin/mandb) 00:02:15.002 Program sphinx-build found: NO 00:02:15.002 Configuring rte_build_config.h using configuration 00:02:15.002 Message: 00:02:15.002 ================= 00:02:15.002 Applications Enabled 00:02:15.002 ================= 00:02:15.002 00:02:15.002 apps: 00:02:15.002 00:02:15.002 00:02:15.002 Message: 00:02:15.002 ================= 00:02:15.002 Libraries Enabled 00:02:15.002 ================= 00:02:15.002 00:02:15.002 libs: 00:02:15.002 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:15.002 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:15.002 cryptodev, dmadev, power, reorder, security, vhost, 00:02:15.002 00:02:15.002 Message: 00:02:15.002 =============== 00:02:15.002 Drivers Enabled 00:02:15.002 =============== 00:02:15.002 00:02:15.002 common: 00:02:15.002 00:02:15.002 bus: 00:02:15.002 pci, vdev, 00:02:15.002 mempool: 00:02:15.002 ring, 00:02:15.002 dma: 00:02:15.002 00:02:15.002 net: 00:02:15.002 00:02:15.002 crypto: 00:02:15.002 00:02:15.002 compress: 00:02:15.002 00:02:15.002 vdpa: 00:02:15.002 00:02:15.002 00:02:15.002 Message: 00:02:15.002 ================= 00:02:15.002 Content Skipped 00:02:15.002 ================= 00:02:15.002 00:02:15.002 apps: 00:02:15.002 dumpcap: explicitly disabled via build config 00:02:15.002 graph: explicitly disabled via build config 00:02:15.002 pdump: explicitly disabled via build config 00:02:15.002 proc-info: explicitly disabled via build config 00:02:15.002 test-acl: explicitly disabled via build config 00:02:15.002 test-bbdev: explicitly disabled via build config 00:02:15.002 test-cmdline: explicitly disabled via build config 00:02:15.002 test-compress-perf: explicitly disabled via build config 00:02:15.002 test-crypto-perf: explicitly disabled 
via build config 00:02:15.002 test-dma-perf: explicitly disabled via build config 00:02:15.002 test-eventdev: explicitly disabled via build config 00:02:15.002 test-fib: explicitly disabled via build config 00:02:15.002 test-flow-perf: explicitly disabled via build config 00:02:15.003 test-gpudev: explicitly disabled via build config 00:02:15.003 test-mldev: explicitly disabled via build config 00:02:15.003 test-pipeline: explicitly disabled via build config 00:02:15.003 test-pmd: explicitly disabled via build config 00:02:15.003 test-regex: explicitly disabled via build config 00:02:15.003 test-sad: explicitly disabled via build config 00:02:15.003 test-security-perf: explicitly disabled via build config 00:02:15.003 00:02:15.003 libs: 00:02:15.003 argparse: explicitly disabled via build config 00:02:15.003 metrics: explicitly disabled via build config 00:02:15.003 acl: explicitly disabled via build config 00:02:15.003 bbdev: explicitly disabled via build config 00:02:15.003 bitratestats: explicitly disabled via build config 00:02:15.003 bpf: explicitly disabled via build config 00:02:15.003 cfgfile: explicitly disabled via build config 00:02:15.003 distributor: explicitly disabled via build config 00:02:15.003 efd: explicitly disabled via build config 00:02:15.003 eventdev: explicitly disabled via build config 00:02:15.003 dispatcher: explicitly disabled via build config 00:02:15.003 gpudev: explicitly disabled via build config 00:02:15.003 gro: explicitly disabled via build config 00:02:15.003 gso: explicitly disabled via build config 00:02:15.003 ip_frag: explicitly disabled via build config 00:02:15.003 jobstats: explicitly disabled via build config 00:02:15.003 latencystats: explicitly disabled via build config 00:02:15.003 lpm: explicitly disabled via build config 00:02:15.003 member: explicitly disabled via build config 00:02:15.003 pcapng: explicitly disabled via build config 00:02:15.003 rawdev: explicitly disabled via build config 00:02:15.003 regexdev: 
explicitly disabled via build config 00:02:15.003 mldev: explicitly disabled via build config 00:02:15.003 rib: explicitly disabled via build config 00:02:15.003 sched: explicitly disabled via build config 00:02:15.003 stack: explicitly disabled via build config 00:02:15.003 ipsec: explicitly disabled via build config 00:02:15.003 pdcp: explicitly disabled via build config 00:02:15.003 fib: explicitly disabled via build config 00:02:15.003 port: explicitly disabled via build config 00:02:15.003 pdump: explicitly disabled via build config 00:02:15.003 table: explicitly disabled via build config 00:02:15.003 pipeline: explicitly disabled via build config 00:02:15.003 graph: explicitly disabled via build config 00:02:15.003 node: explicitly disabled via build config 00:02:15.003 00:02:15.003 drivers: 00:02:15.003 common/cpt: not in enabled drivers build config 00:02:15.003 common/dpaax: not in enabled drivers build config 00:02:15.003 common/iavf: not in enabled drivers build config 00:02:15.003 common/idpf: not in enabled drivers build config 00:02:15.003 common/ionic: not in enabled drivers build config 00:02:15.003 common/mvep: not in enabled drivers build config 00:02:15.003 common/octeontx: not in enabled drivers build config 00:02:15.003 bus/auxiliary: not in enabled drivers build config 00:02:15.003 bus/cdx: not in enabled drivers build config 00:02:15.003 bus/dpaa: not in enabled drivers build config 00:02:15.003 bus/fslmc: not in enabled drivers build config 00:02:15.003 bus/ifpga: not in enabled drivers build config 00:02:15.003 bus/platform: not in enabled drivers build config 00:02:15.003 bus/uacce: not in enabled drivers build config 00:02:15.003 bus/vmbus: not in enabled drivers build config 00:02:15.003 common/cnxk: not in enabled drivers build config 00:02:15.003 common/mlx5: not in enabled drivers build config 00:02:15.003 common/nfp: not in enabled drivers build config 00:02:15.003 common/nitrox: not in enabled drivers build config 00:02:15.003 
common/qat: not in enabled drivers build config 00:02:15.003 common/sfc_efx: not in enabled drivers build config 00:02:15.003 mempool/bucket: not in enabled drivers build config 00:02:15.003 mempool/cnxk: not in enabled drivers build config 00:02:15.003 mempool/dpaa: not in enabled drivers build config 00:02:15.003 mempool/dpaa2: not in enabled drivers build config 00:02:15.003 mempool/octeontx: not in enabled drivers build config 00:02:15.003 mempool/stack: not in enabled drivers build config 00:02:15.003 dma/cnxk: not in enabled drivers build config 00:02:15.003 dma/dpaa: not in enabled drivers build config 00:02:15.003 dma/dpaa2: not in enabled drivers build config 00:02:15.003 dma/hisilicon: not in enabled drivers build config 00:02:15.003 dma/idxd: not in enabled drivers build config 00:02:15.003 dma/ioat: not in enabled drivers build config 00:02:15.003 dma/skeleton: not in enabled drivers build config 00:02:15.003 net/af_packet: not in enabled drivers build config 00:02:15.003 net/af_xdp: not in enabled drivers build config 00:02:15.003 net/ark: not in enabled drivers build config 00:02:15.003 net/atlantic: not in enabled drivers build config 00:02:15.003 net/avp: not in enabled drivers build config 00:02:15.003 net/axgbe: not in enabled drivers build config 00:02:15.003 net/bnx2x: not in enabled drivers build config 00:02:15.003 net/bnxt: not in enabled drivers build config 00:02:15.003 net/bonding: not in enabled drivers build config 00:02:15.003 net/cnxk: not in enabled drivers build config 00:02:15.003 net/cpfl: not in enabled drivers build config 00:02:15.003 net/cxgbe: not in enabled drivers build config 00:02:15.003 net/dpaa: not in enabled drivers build config 00:02:15.003 net/dpaa2: not in enabled drivers build config 00:02:15.003 net/e1000: not in enabled drivers build config 00:02:15.003 net/ena: not in enabled drivers build config 00:02:15.003 net/enetc: not in enabled drivers build config 00:02:15.003 net/enetfec: not in enabled drivers build 
config 00:02:15.003 net/enic: not in enabled drivers build config 00:02:15.003 net/failsafe: not in enabled drivers build config 00:02:15.003 net/fm10k: not in enabled drivers build config 00:02:15.003 net/gve: not in enabled drivers build config 00:02:15.003 net/hinic: not in enabled drivers build config 00:02:15.003 net/hns3: not in enabled drivers build config 00:02:15.003 net/i40e: not in enabled drivers build config 00:02:15.003 net/iavf: not in enabled drivers build config 00:02:15.003 net/ice: not in enabled drivers build config 00:02:15.003 net/idpf: not in enabled drivers build config 00:02:15.003 net/igc: not in enabled drivers build config 00:02:15.003 net/ionic: not in enabled drivers build config 00:02:15.003 net/ipn3ke: not in enabled drivers build config 00:02:15.003 net/ixgbe: not in enabled drivers build config 00:02:15.003 net/mana: not in enabled drivers build config 00:02:15.003 net/memif: not in enabled drivers build config 00:02:15.003 net/mlx4: not in enabled drivers build config 00:02:15.003 net/mlx5: not in enabled drivers build config 00:02:15.003 net/mvneta: not in enabled drivers build config 00:02:15.003 net/mvpp2: not in enabled drivers build config 00:02:15.003 net/netvsc: not in enabled drivers build config 00:02:15.003 net/nfb: not in enabled drivers build config 00:02:15.003 net/nfp: not in enabled drivers build config 00:02:15.003 net/ngbe: not in enabled drivers build config 00:02:15.003 net/null: not in enabled drivers build config 00:02:15.003 net/octeontx: not in enabled drivers build config 00:02:15.003 net/octeon_ep: not in enabled drivers build config 00:02:15.003 net/pcap: not in enabled drivers build config 00:02:15.003 net/pfe: not in enabled drivers build config 00:02:15.003 net/qede: not in enabled drivers build config 00:02:15.003 net/ring: not in enabled drivers build config 00:02:15.003 net/sfc: not in enabled drivers build config 00:02:15.003 net/softnic: not in enabled drivers build config 00:02:15.003 net/tap: 
not in enabled drivers build config 00:02:15.003 net/thunderx: not in enabled drivers build config 00:02:15.003 net/txgbe: not in enabled drivers build config 00:02:15.003 net/vdev_netvsc: not in enabled drivers build config 00:02:15.003 net/vhost: not in enabled drivers build config 00:02:15.003 net/virtio: not in enabled drivers build config 00:02:15.003 net/vmxnet3: not in enabled drivers build config 00:02:15.003 raw/*: missing internal dependency, "rawdev" 00:02:15.003 crypto/armv8: not in enabled drivers build config 00:02:15.003 crypto/bcmfs: not in enabled drivers build config 00:02:15.003 crypto/caam_jr: not in enabled drivers build config 00:02:15.003 crypto/ccp: not in enabled drivers build config 00:02:15.003 crypto/cnxk: not in enabled drivers build config 00:02:15.003 crypto/dpaa_sec: not in enabled drivers build config 00:02:15.003 crypto/dpaa2_sec: not in enabled drivers build config 00:02:15.003 crypto/ipsec_mb: not in enabled drivers build config 00:02:15.003 crypto/mlx5: not in enabled drivers build config 00:02:15.003 crypto/mvsam: not in enabled drivers build config 00:02:15.003 crypto/nitrox: not in enabled drivers build config 00:02:15.003 crypto/null: not in enabled drivers build config 00:02:15.003 crypto/octeontx: not in enabled drivers build config 00:02:15.003 crypto/openssl: not in enabled drivers build config 00:02:15.003 crypto/scheduler: not in enabled drivers build config 00:02:15.003 crypto/uadk: not in enabled drivers build config 00:02:15.003 crypto/virtio: not in enabled drivers build config 00:02:15.003 compress/isal: not in enabled drivers build config 00:02:15.003 compress/mlx5: not in enabled drivers build config 00:02:15.003 compress/nitrox: not in enabled drivers build config 00:02:15.003 compress/octeontx: not in enabled drivers build config 00:02:15.003 compress/zlib: not in enabled drivers build config 00:02:15.003 regex/*: missing internal dependency, "regexdev" 00:02:15.003 ml/*: missing internal dependency, "mldev" 
00:02:15.003 vdpa/ifc: not in enabled drivers build config 00:02:15.003 vdpa/mlx5: not in enabled drivers build config 00:02:15.003 vdpa/nfp: not in enabled drivers build config 00:02:15.003 vdpa/sfc: not in enabled drivers build config 00:02:15.003 event/*: missing internal dependency, "eventdev" 00:02:15.003 baseband/*: missing internal dependency, "bbdev" 00:02:15.003 gpu/*: missing internal dependency, "gpudev" 00:02:15.003 00:02:15.003 00:02:15.003 Build targets in project: 85 00:02:15.003 00:02:15.003 DPDK 24.03.0 00:02:15.003 00:02:15.003 User defined options 00:02:15.003 buildtype : debug 00:02:15.003 default_library : shared 00:02:15.003 libdir : lib 00:02:15.003 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:15.003 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:15.003 c_link_args : 00:02:15.003 cpu_instruction_set: native 00:02:15.004 disable_apps : test-cmdline,dumpcap,test-dma-perf,test-bbdev,test,test-flow-perf,test-security-perf,test-compress-perf,test-fib,test-regex,test-acl,test-crypto-perf,test-mldev,proc-info,graph,test-sad,test-pipeline,test-pmd,pdump,test-eventdev,test-gpudev 00:02:15.004 disable_libs : rawdev,pipeline,argparse,node,gpudev,jobstats,port,pcapng,ip_frag,pdcp,table,lpm,efd,gso,stack,eventdev,bpf,dispatcher,mldev,fib,ipsec,acl,graph,metrics,regexdev,distributor,latencystats,bbdev,cfgfile,member,sched,gro,rib,bitratestats,pdump 00:02:15.004 enable_docs : false 00:02:15.004 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:15.004 enable_kmods : false 00:02:15.004 max_lcores : 128 00:02:15.004 tests : false 00:02:15.004 00:02:15.004 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.574 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:15.574 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:15.574 [2/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.574 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:15.574 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:15.574 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:15.574 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:15.574 [7/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.574 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.574 [9/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:15.574 [10/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:15.574 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:15.839 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:15.839 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:15.839 [14/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:15.839 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:15.839 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:15.839 [17/268] Linking static target lib/librte_kvargs.a 00:02:15.839 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.839 [19/268] Linking static target lib/librte_log.a 00:02:15.839 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.839 [21/268] Linking static target lib/librte_pci.a 00:02:15.839 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:15.839 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:16.098 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:16.098 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:16.098 [26/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:16.098 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:16.098 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:16.098 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:16.098 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:16.098 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:16.098 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.098 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.098 [34/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:16.098 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:16.098 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.098 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.098 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.098 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:16.098 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.098 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.098 [42/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:16.098 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:16.098 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:16.098 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.098 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:16.098 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.098 [48/268] Linking static 
target lib/librte_meter.a 00:02:16.098 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.098 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:16.098 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:16.098 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.098 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:16.098 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.098 [55/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:16.098 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.098 [57/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:16.098 [58/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.098 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:16.098 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:16.098 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:16.098 [62/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:16.098 [63/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:16.098 [64/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:16.098 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.098 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:16.098 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.098 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.098 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:16.098 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:16.098 [71/268] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:16.358 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:16.359 [73/268] Linking static target lib/librte_ring.a 00:02:16.359 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:16.359 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:16.359 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:16.359 [77/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:16.359 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.359 [79/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:16.359 [80/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:16.359 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:16.359 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:16.359 [83/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:16.359 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:16.359 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:16.359 [86/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:16.359 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.359 [88/268] Linking static target lib/librte_telemetry.a 00:02:16.359 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.359 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.359 [91/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:16.359 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:16.359 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.359 [94/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:16.359 [95/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:16.359 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:16.359 [97/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.359 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:16.359 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:16.359 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.359 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:16.359 [102/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:16.359 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:16.359 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:16.359 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.359 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:16.359 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.359 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:16.359 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:16.359 [110/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.359 [111/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:16.359 [112/268] Linking static target lib/librte_mempool.a 00:02:16.359 [113/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.359 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.359 [115/268] Linking static target lib/librte_rcu.a 00:02:16.359 [116/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:16.359 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.359 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:16.359 [119/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:16.359 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:16.359 [121/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.359 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.359 [123/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:16.359 [124/268] Linking static target lib/librte_eal.a 00:02:16.359 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:16.359 [126/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:16.359 [127/268] Linking static target lib/librte_net.a 00:02:16.359 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:16.359 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:16.359 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.359 [131/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.359 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:16.359 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.359 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:16.617 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:16.617 [136/268] Linking static target lib/librte_cmdline.a 00:02:16.617 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:16.617 [138/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.617 [139/268] Generating lib/ring.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:16.617 [140/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.617 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:16.617 [142/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:16.617 [143/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:16.617 [144/268] Linking target lib/librte_log.so.24.1 00:02:16.617 [145/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:16.617 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:16.617 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.617 [148/268] Linking static target lib/librte_timer.a 00:02:16.617 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.617 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:16.617 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:16.617 [152/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:16.617 [153/268] Linking static target lib/librte_mbuf.a 00:02:16.617 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:16.617 [155/268] Linking static target lib/librte_dmadev.a 00:02:16.617 [156/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:16.617 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:16.617 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:16.617 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:16.617 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:16.617 [161/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:16.617 
[162/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:16.617 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:16.617 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:16.617 [165/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.617 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.617 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:16.617 [168/268] Linking static target lib/librte_reorder.a 00:02:16.617 [169/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.617 [170/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:16.617 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:16.875 [172/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:16.875 [173/268] Linking target lib/librte_kvargs.so.24.1 00:02:16.875 [174/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:16.875 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:16.875 [176/268] Linking static target lib/librte_compressdev.a 00:02:16.875 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:16.875 [178/268] Linking target lib/librte_telemetry.so.24.1 00:02:16.875 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:16.875 [180/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:16.875 [181/268] Linking static target lib/librte_power.a 00:02:16.875 [182/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:16.875 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:16.875 [184/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:16.875 [185/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:16.875 [186/268] Linking static target lib/librte_security.a 00:02:16.875 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:16.875 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:16.875 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:16.875 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:16.875 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:16.875 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:16.875 [193/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:16.875 [194/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:16.875 [195/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:16.875 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:16.875 [197/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:16.875 [198/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.875 [199/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.875 [200/268] Linking static target drivers/librte_mempool_ring.a 00:02:16.875 [201/268] Linking static target lib/librte_hash.a 00:02:16.875 [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:16.875 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:17.133 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.133 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.133 [206/268] Linking 
static target drivers/librte_bus_vdev.a 00:02:17.133 [207/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:17.133 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:17.133 [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.133 [210/268] Linking static target lib/librte_cryptodev.a 00:02:17.133 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.133 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.133 [213/268] Linking static target drivers/librte_bus_pci.a 00:02:17.133 [214/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.133 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.392 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.392 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.392 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.392 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.392 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.392 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:17.392 [222/268] Linking static target lib/librte_ethdev.a 00:02:17.651 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:17.651 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.651 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.909 [226/268] Generating 
drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.909 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.843 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:18.843 [229/268] Linking static target lib/librte_vhost.a 00:02:18.843 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.744 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.019 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.278 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.278 [234/268] Linking target lib/librte_eal.so.24.1 00:02:26.278 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:26.537 [236/268] Linking target lib/librte_ring.so.24.1 00:02:26.537 [237/268] Linking target lib/librte_meter.so.24.1 00:02:26.537 [238/268] Linking target lib/librte_pci.so.24.1 00:02:26.537 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:26.537 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:26.537 [241/268] Linking target lib/librte_timer.so.24.1 00:02:26.537 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:26.537 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:26.537 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:26.537 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:26.537 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:26.537 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:26.537 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:26.537 [249/268] 
Linking target lib/librte_mempool.so.24.1 00:02:26.796 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:26.796 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:26.796 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:26.796 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:27.055 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:27.055 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:27.055 [256/268] Linking target lib/librte_net.so.24.1 00:02:27.055 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:27.055 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:27.055 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:27.055 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:27.055 [261/268] Linking target lib/librte_hash.so.24.1 00:02:27.055 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:27.055 [263/268] Linking target lib/librte_security.so.24.1 00:02:27.055 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:27.314 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:27.314 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:27.314 [267/268] Linking target lib/librte_power.so.24.1 00:02:27.314 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:27.314 INFO: autodetecting backend as ninja 00:02:27.314 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:39.591 CC lib/ut_mock/mock.o 00:02:39.591 CC lib/ut/ut.o 00:02:39.591 CC lib/log/log.o 00:02:39.591 CC lib/log/log_flags.o 00:02:39.591 CC lib/log/log_deprecated.o 00:02:39.591 LIB libspdk_ut.a 00:02:39.591 LIB libspdk_log.a 00:02:39.591 LIB 
libspdk_ut_mock.a 00:02:39.591 SO libspdk_ut.so.2.0 00:02:39.591 SO libspdk_log.so.7.1 00:02:39.591 SO libspdk_ut_mock.so.6.0 00:02:39.591 SYMLINK libspdk_ut.so 00:02:39.591 SYMLINK libspdk_ut_mock.so 00:02:39.591 SYMLINK libspdk_log.so 00:02:39.591 CC lib/dma/dma.o 00:02:39.591 CC lib/util/base64.o 00:02:39.591 CC lib/ioat/ioat.o 00:02:39.591 CC lib/util/bit_array.o 00:02:39.591 CC lib/util/cpuset.o 00:02:39.591 CC lib/util/crc16.o 00:02:39.591 CC lib/util/crc32.o 00:02:39.591 CXX lib/trace_parser/trace.o 00:02:39.591 CC lib/util/crc32c.o 00:02:39.591 CC lib/util/crc32_ieee.o 00:02:39.591 CC lib/util/crc64.o 00:02:39.591 CC lib/util/dif.o 00:02:39.591 CC lib/util/fd.o 00:02:39.591 CC lib/util/fd_group.o 00:02:39.591 CC lib/util/file.o 00:02:39.591 CC lib/util/hexlify.o 00:02:39.591 CC lib/util/iov.o 00:02:39.591 CC lib/util/math.o 00:02:39.591 CC lib/util/net.o 00:02:39.591 CC lib/util/pipe.o 00:02:39.591 CC lib/util/strerror_tls.o 00:02:39.591 CC lib/util/string.o 00:02:39.591 CC lib/util/uuid.o 00:02:39.591 CC lib/util/xor.o 00:02:39.591 CC lib/util/zipf.o 00:02:39.591 CC lib/util/md5.o 00:02:39.591 CC lib/vfio_user/host/vfio_user_pci.o 00:02:39.591 CC lib/vfio_user/host/vfio_user.o 00:02:39.591 LIB libspdk_dma.a 00:02:39.591 SO libspdk_dma.so.5.0 00:02:39.591 LIB libspdk_ioat.a 00:02:39.591 SYMLINK libspdk_dma.so 00:02:39.591 SO libspdk_ioat.so.7.0 00:02:39.591 SYMLINK libspdk_ioat.so 00:02:39.591 LIB libspdk_vfio_user.a 00:02:39.591 SO libspdk_vfio_user.so.5.0 00:02:39.591 LIB libspdk_util.a 00:02:39.591 SYMLINK libspdk_vfio_user.so 00:02:39.591 SO libspdk_util.so.10.1 00:02:39.591 SYMLINK libspdk_util.so 00:02:39.591 LIB libspdk_trace_parser.a 00:02:39.591 SO libspdk_trace_parser.so.6.0 00:02:39.591 SYMLINK libspdk_trace_parser.so 00:02:39.850 CC lib/json/json_util.o 00:02:39.850 CC lib/json/json_parse.o 00:02:39.850 CC lib/json/json_write.o 00:02:39.850 CC lib/vmd/vmd.o 00:02:39.850 CC lib/vmd/led.o 00:02:39.850 CC lib/rdma_utils/rdma_utils.o 00:02:39.850 CC 
lib/env_dpdk/env.o 00:02:39.850 CC lib/env_dpdk/memory.o 00:02:39.850 CC lib/idxd/idxd.o 00:02:39.850 CC lib/conf/conf.o 00:02:39.850 CC lib/idxd/idxd_user.o 00:02:39.850 CC lib/env_dpdk/pci.o 00:02:39.850 CC lib/idxd/idxd_kernel.o 00:02:39.850 CC lib/env_dpdk/init.o 00:02:39.850 CC lib/env_dpdk/threads.o 00:02:39.850 CC lib/env_dpdk/pci_ioat.o 00:02:39.850 CC lib/env_dpdk/pci_virtio.o 00:02:39.850 CC lib/env_dpdk/pci_vmd.o 00:02:39.850 CC lib/env_dpdk/pci_idxd.o 00:02:39.850 CC lib/env_dpdk/pci_event.o 00:02:39.850 CC lib/env_dpdk/sigbus_handler.o 00:02:39.850 CC lib/env_dpdk/pci_dpdk.o 00:02:39.850 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:39.850 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:40.109 LIB libspdk_conf.a 00:02:40.109 LIB libspdk_json.a 00:02:40.109 LIB libspdk_rdma_utils.a 00:02:40.109 SO libspdk_conf.so.6.0 00:02:40.109 SO libspdk_json.so.6.0 00:02:40.109 SO libspdk_rdma_utils.so.1.0 00:02:40.109 SYMLINK libspdk_conf.so 00:02:40.109 SYMLINK libspdk_rdma_utils.so 00:02:40.109 SYMLINK libspdk_json.so 00:02:40.367 LIB libspdk_idxd.a 00:02:40.367 LIB libspdk_vmd.a 00:02:40.367 SO libspdk_idxd.so.12.1 00:02:40.367 SO libspdk_vmd.so.6.0 00:02:40.367 SYMLINK libspdk_idxd.so 00:02:40.367 SYMLINK libspdk_vmd.so 00:02:40.626 CC lib/jsonrpc/jsonrpc_server.o 00:02:40.626 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:40.626 CC lib/jsonrpc/jsonrpc_client.o 00:02:40.626 CC lib/rdma_provider/common.o 00:02:40.626 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:40.626 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:40.626 LIB libspdk_rdma_provider.a 00:02:40.626 LIB libspdk_jsonrpc.a 00:02:40.885 SO libspdk_rdma_provider.so.7.0 00:02:40.885 SO libspdk_jsonrpc.so.6.0 00:02:40.885 SYMLINK libspdk_rdma_provider.so 00:02:40.885 SYMLINK libspdk_jsonrpc.so 00:02:40.885 LIB libspdk_env_dpdk.a 00:02:40.885 SO libspdk_env_dpdk.so.15.1 00:02:41.144 SYMLINK libspdk_env_dpdk.so 00:02:41.144 CC lib/rpc/rpc.o 00:02:41.403 LIB libspdk_rpc.a 00:02:41.403 SO libspdk_rpc.so.6.0 00:02:41.403 SYMLINK 
libspdk_rpc.so 00:02:41.662 CC lib/trace/trace.o 00:02:41.662 CC lib/trace/trace_flags.o 00:02:41.662 CC lib/trace/trace_rpc.o 00:02:41.662 CC lib/notify/notify.o 00:02:41.662 CC lib/keyring/keyring.o 00:02:41.662 CC lib/notify/notify_rpc.o 00:02:41.662 CC lib/keyring/keyring_rpc.o 00:02:41.922 LIB libspdk_notify.a 00:02:41.922 SO libspdk_notify.so.6.0 00:02:41.922 LIB libspdk_keyring.a 00:02:41.922 LIB libspdk_trace.a 00:02:41.922 SO libspdk_keyring.so.2.0 00:02:41.922 SO libspdk_trace.so.11.0 00:02:41.922 SYMLINK libspdk_notify.so 00:02:41.922 SYMLINK libspdk_keyring.so 00:02:42.181 SYMLINK libspdk_trace.so 00:02:42.440 CC lib/sock/sock.o 00:02:42.440 CC lib/sock/sock_rpc.o 00:02:42.440 CC lib/thread/thread.o 00:02:42.440 CC lib/thread/iobuf.o 00:02:42.699 LIB libspdk_sock.a 00:02:42.699 SO libspdk_sock.so.10.0 00:02:42.699 SYMLINK libspdk_sock.so 00:02:42.959 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:42.959 CC lib/nvme/nvme_ctrlr.o 00:02:42.959 CC lib/nvme/nvme_fabric.o 00:02:42.959 CC lib/nvme/nvme_ns_cmd.o 00:02:42.959 CC lib/nvme/nvme_ns.o 00:02:42.959 CC lib/nvme/nvme_pcie_common.o 00:02:42.959 CC lib/nvme/nvme_qpair.o 00:02:42.959 CC lib/nvme/nvme_pcie.o 00:02:43.218 CC lib/nvme/nvme.o 00:02:43.218 CC lib/nvme/nvme_quirks.o 00:02:43.218 CC lib/nvme/nvme_transport.o 00:02:43.218 CC lib/nvme/nvme_discovery.o 00:02:43.218 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:43.218 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:43.218 CC lib/nvme/nvme_tcp.o 00:02:43.218 CC lib/nvme/nvme_opal.o 00:02:43.218 CC lib/nvme/nvme_io_msg.o 00:02:43.218 CC lib/nvme/nvme_poll_group.o 00:02:43.218 CC lib/nvme/nvme_zns.o 00:02:43.218 CC lib/nvme/nvme_stubs.o 00:02:43.218 CC lib/nvme/nvme_auth.o 00:02:43.218 CC lib/nvme/nvme_cuse.o 00:02:43.218 CC lib/nvme/nvme_vfio_user.o 00:02:43.218 CC lib/nvme/nvme_rdma.o 00:02:43.477 LIB libspdk_thread.a 00:02:43.477 SO libspdk_thread.so.11.0 00:02:43.477 SYMLINK libspdk_thread.so 00:02:43.736 CC lib/virtio/virtio_vhost_user.o 00:02:43.736 CC lib/virtio/virtio.o 
00:02:43.736 CC lib/fsdev/fsdev_io.o 00:02:43.736 CC lib/fsdev/fsdev.o 00:02:43.736 CC lib/fsdev/fsdev_rpc.o 00:02:43.736 CC lib/virtio/virtio_vfio_user.o 00:02:43.736 CC lib/virtio/virtio_pci.o 00:02:43.736 CC lib/init/json_config.o 00:02:43.736 CC lib/vfu_tgt/tgt_endpoint.o 00:02:43.736 CC lib/init/subsystem.o 00:02:43.736 CC lib/vfu_tgt/tgt_rpc.o 00:02:43.736 CC lib/blob/blobstore.o 00:02:43.736 CC lib/init/subsystem_rpc.o 00:02:43.736 CC lib/blob/request.o 00:02:43.736 CC lib/init/rpc.o 00:02:43.736 CC lib/blob/blob_bs_dev.o 00:02:43.736 CC lib/blob/zeroes.o 00:02:43.736 CC lib/accel/accel.o 00:02:43.736 CC lib/accel/accel_rpc.o 00:02:43.736 CC lib/accel/accel_sw.o 00:02:43.995 LIB libspdk_init.a 00:02:43.995 SO libspdk_init.so.6.0 00:02:44.254 LIB libspdk_virtio.a 00:02:44.254 LIB libspdk_vfu_tgt.a 00:02:44.254 SO libspdk_virtio.so.7.0 00:02:44.254 SYMLINK libspdk_init.so 00:02:44.254 SO libspdk_vfu_tgt.so.3.0 00:02:44.254 SYMLINK libspdk_virtio.so 00:02:44.254 SYMLINK libspdk_vfu_tgt.so 00:02:44.254 LIB libspdk_fsdev.a 00:02:44.512 SO libspdk_fsdev.so.2.0 00:02:44.512 SYMLINK libspdk_fsdev.so 00:02:44.512 CC lib/event/app.o 00:02:44.512 CC lib/event/reactor.o 00:02:44.512 CC lib/event/log_rpc.o 00:02:44.512 CC lib/event/app_rpc.o 00:02:44.512 CC lib/event/scheduler_static.o 00:02:44.771 LIB libspdk_accel.a 00:02:44.771 SO libspdk_accel.so.16.0 00:02:44.771 LIB libspdk_nvme.a 00:02:44.771 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:44.771 SYMLINK libspdk_accel.so 00:02:44.772 LIB libspdk_event.a 00:02:44.772 SO libspdk_nvme.so.15.0 00:02:44.772 SO libspdk_event.so.14.0 00:02:45.031 SYMLINK libspdk_event.so 00:02:45.031 SYMLINK libspdk_nvme.so 00:02:45.031 CC lib/bdev/bdev.o 00:02:45.031 CC lib/bdev/bdev_rpc.o 00:02:45.031 CC lib/bdev/bdev_zone.o 00:02:45.031 CC lib/bdev/part.o 00:02:45.031 CC lib/bdev/scsi_nvme.o 00:02:45.290 LIB libspdk_fuse_dispatcher.a 00:02:45.290 SO libspdk_fuse_dispatcher.so.1.0 00:02:45.290 SYMLINK libspdk_fuse_dispatcher.so 
00:02:46.228 LIB libspdk_blob.a 00:02:46.228 SO libspdk_blob.so.11.0 00:02:46.228 SYMLINK libspdk_blob.so 00:02:46.487 CC lib/blobfs/blobfs.o 00:02:46.487 CC lib/blobfs/tree.o 00:02:46.487 CC lib/lvol/lvol.o 00:02:47.054 LIB libspdk_bdev.a 00:02:47.054 SO libspdk_bdev.so.17.0 00:02:47.054 LIB libspdk_blobfs.a 00:02:47.054 SYMLINK libspdk_bdev.so 00:02:47.054 SO libspdk_blobfs.so.10.0 00:02:47.054 LIB libspdk_lvol.a 00:02:47.054 SYMLINK libspdk_blobfs.so 00:02:47.054 SO libspdk_lvol.so.10.0 00:02:47.313 SYMLINK libspdk_lvol.so 00:02:47.313 CC lib/nvmf/ctrlr.o 00:02:47.313 CC lib/nvmf/ctrlr_discovery.o 00:02:47.313 CC lib/nvmf/ctrlr_bdev.o 00:02:47.313 CC lib/nvmf/subsystem.o 00:02:47.313 CC lib/nvmf/nvmf.o 00:02:47.313 CC lib/nvmf/nvmf_rpc.o 00:02:47.313 CC lib/nvmf/transport.o 00:02:47.313 CC lib/nvmf/tcp.o 00:02:47.313 CC lib/nvmf/stubs.o 00:02:47.313 CC lib/nvmf/mdns_server.o 00:02:47.313 CC lib/nvmf/vfio_user.o 00:02:47.313 CC lib/nbd/nbd.o 00:02:47.313 CC lib/nvmf/rdma.o 00:02:47.313 CC lib/nbd/nbd_rpc.o 00:02:47.313 CC lib/ftl/ftl_core.o 00:02:47.313 CC lib/nvmf/auth.o 00:02:47.313 CC lib/ublk/ublk.o 00:02:47.313 CC lib/ftl/ftl_init.o 00:02:47.313 CC lib/ublk/ublk_rpc.o 00:02:47.313 CC lib/scsi/dev.o 00:02:47.313 CC lib/ftl/ftl_layout.o 00:02:47.313 CC lib/ftl/ftl_debug.o 00:02:47.313 CC lib/scsi/lun.o 00:02:47.313 CC lib/ftl/ftl_io.o 00:02:47.313 CC lib/ftl/ftl_sb.o 00:02:47.313 CC lib/scsi/port.o 00:02:47.313 CC lib/ftl/ftl_l2p.o 00:02:47.313 CC lib/scsi/scsi.o 00:02:47.313 CC lib/ftl/ftl_l2p_flat.o 00:02:47.313 CC lib/scsi/scsi_bdev.o 00:02:47.313 CC lib/ftl/ftl_band.o 00:02:47.313 CC lib/ftl/ftl_nv_cache.o 00:02:47.313 CC lib/scsi/scsi_rpc.o 00:02:47.313 CC lib/scsi/scsi_pr.o 00:02:47.313 CC lib/scsi/task.o 00:02:47.313 CC lib/ftl/ftl_band_ops.o 00:02:47.313 CC lib/ftl/ftl_writer.o 00:02:47.313 CC lib/ftl/ftl_rq.o 00:02:47.313 CC lib/ftl/ftl_reloc.o 00:02:47.313 CC lib/ftl/ftl_p2l.o 00:02:47.313 CC lib/ftl/ftl_l2p_cache.o 00:02:47.313 CC 
lib/ftl/ftl_p2l_log.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:47.313 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:47.313 CC lib/ftl/utils/ftl_conf.o 00:02:47.313 CC lib/ftl/utils/ftl_md.o 00:02:47.313 CC lib/ftl/utils/ftl_mempool.o 00:02:47.313 CC lib/ftl/utils/ftl_bitmap.o 00:02:47.313 CC lib/ftl/utils/ftl_property.o 00:02:47.313 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:47.313 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:47.313 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:47.313 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:47.313 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:47.572 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:47.572 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:47.572 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:47.572 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:47.572 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:47.572 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:47.572 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:47.572 CC lib/ftl/base/ftl_base_dev.o 00:02:47.572 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:47.572 CC lib/ftl/base/ftl_base_bdev.o 00:02:47.572 CC lib/ftl/ftl_trace.o 00:02:48.139 LIB libspdk_nbd.a 00:02:48.139 LIB libspdk_scsi.a 00:02:48.139 SO libspdk_nbd.so.7.0 00:02:48.139 SO libspdk_scsi.so.9.0 00:02:48.139 LIB libspdk_ublk.a 00:02:48.139 SYMLINK libspdk_nbd.so 00:02:48.139 SO libspdk_ublk.so.3.0 00:02:48.139 SYMLINK libspdk_scsi.so 00:02:48.139 SYMLINK libspdk_ublk.so 00:02:48.398 LIB libspdk_ftl.a 00:02:48.398 CC lib/iscsi/conn.o 00:02:48.398 CC 
lib/iscsi/init_grp.o 00:02:48.398 CC lib/vhost/vhost.o 00:02:48.398 CC lib/iscsi/iscsi.o 00:02:48.398 CC lib/vhost/vhost_rpc.o 00:02:48.398 CC lib/iscsi/param.o 00:02:48.398 CC lib/vhost/vhost_scsi.o 00:02:48.398 CC lib/iscsi/portal_grp.o 00:02:48.398 CC lib/vhost/vhost_blk.o 00:02:48.398 CC lib/iscsi/tgt_node.o 00:02:48.398 CC lib/vhost/rte_vhost_user.o 00:02:48.398 CC lib/iscsi/iscsi_subsystem.o 00:02:48.398 CC lib/iscsi/iscsi_rpc.o 00:02:48.398 CC lib/iscsi/task.o 00:02:48.398 SO libspdk_ftl.so.9.0 00:02:48.657 SYMLINK libspdk_ftl.so 00:02:49.226 LIB libspdk_nvmf.a 00:02:49.226 SO libspdk_nvmf.so.20.0 00:02:49.226 LIB libspdk_vhost.a 00:02:49.226 SO libspdk_vhost.so.8.0 00:02:49.226 SYMLINK libspdk_nvmf.so 00:02:49.226 SYMLINK libspdk_vhost.so 00:02:49.485 LIB libspdk_iscsi.a 00:02:49.485 SO libspdk_iscsi.so.8.0 00:02:49.485 SYMLINK libspdk_iscsi.so 00:02:50.053 CC module/vfu_device/vfu_virtio.o 00:02:50.053 CC module/vfu_device/vfu_virtio_blk.o 00:02:50.053 CC module/vfu_device/vfu_virtio_scsi.o 00:02:50.053 CC module/vfu_device/vfu_virtio_rpc.o 00:02:50.053 CC module/vfu_device/vfu_virtio_fs.o 00:02:50.053 CC module/env_dpdk/env_dpdk_rpc.o 00:02:50.312 CC module/scheduler/gscheduler/gscheduler.o 00:02:50.312 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:50.312 CC module/keyring/linux/keyring.o 00:02:50.312 CC module/keyring/linux/keyring_rpc.o 00:02:50.312 CC module/keyring/file/keyring.o 00:02:50.312 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:50.312 CC module/fsdev/aio/fsdev_aio.o 00:02:50.312 CC module/keyring/file/keyring_rpc.o 00:02:50.312 CC module/fsdev/aio/linux_aio_mgr.o 00:02:50.312 CC module/accel/dsa/accel_dsa.o 00:02:50.312 CC module/accel/dsa/accel_dsa_rpc.o 00:02:50.312 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:50.312 CC module/accel/ioat/accel_ioat.o 00:02:50.312 CC module/accel/iaa/accel_iaa.o 00:02:50.312 LIB libspdk_env_dpdk_rpc.a 00:02:50.312 CC module/accel/ioat/accel_ioat_rpc.o 00:02:50.312 CC 
module/accel/error/accel_error.o 00:02:50.312 CC module/accel/error/accel_error_rpc.o 00:02:50.312 CC module/sock/posix/posix.o 00:02:50.312 CC module/accel/iaa/accel_iaa_rpc.o 00:02:50.312 CC module/blob/bdev/blob_bdev.o 00:02:50.312 SO libspdk_env_dpdk_rpc.so.6.0 00:02:50.312 SYMLINK libspdk_env_dpdk_rpc.so 00:02:50.312 LIB libspdk_keyring_linux.a 00:02:50.312 LIB libspdk_scheduler_gscheduler.a 00:02:50.312 LIB libspdk_scheduler_dpdk_governor.a 00:02:50.312 LIB libspdk_keyring_file.a 00:02:50.312 SO libspdk_scheduler_gscheduler.so.4.0 00:02:50.312 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:50.571 SO libspdk_keyring_linux.so.1.0 00:02:50.571 SO libspdk_keyring_file.so.2.0 00:02:50.571 LIB libspdk_scheduler_dynamic.a 00:02:50.571 LIB libspdk_accel_iaa.a 00:02:50.571 LIB libspdk_accel_error.a 00:02:50.571 LIB libspdk_accel_ioat.a 00:02:50.571 SYMLINK libspdk_keyring_linux.so 00:02:50.571 SO libspdk_scheduler_dynamic.so.4.0 00:02:50.571 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:50.571 SYMLINK libspdk_scheduler_gscheduler.so 00:02:50.571 SYMLINK libspdk_keyring_file.so 00:02:50.571 SO libspdk_accel_iaa.so.3.0 00:02:50.571 SO libspdk_accel_ioat.so.6.0 00:02:50.571 SO libspdk_accel_error.so.2.0 00:02:50.571 LIB libspdk_blob_bdev.a 00:02:50.571 LIB libspdk_accel_dsa.a 00:02:50.571 SYMLINK libspdk_scheduler_dynamic.so 00:02:50.571 SYMLINK libspdk_accel_error.so 00:02:50.571 SO libspdk_blob_bdev.so.11.0 00:02:50.571 SYMLINK libspdk_accel_iaa.so 00:02:50.571 SYMLINK libspdk_accel_ioat.so 00:02:50.571 SO libspdk_accel_dsa.so.5.0 00:02:50.571 LIB libspdk_vfu_device.a 00:02:50.571 SYMLINK libspdk_blob_bdev.so 00:02:50.571 SYMLINK libspdk_accel_dsa.so 00:02:50.571 SO libspdk_vfu_device.so.3.0 00:02:50.830 SYMLINK libspdk_vfu_device.so 00:02:50.830 LIB libspdk_fsdev_aio.a 00:02:50.830 SO libspdk_fsdev_aio.so.1.0 00:02:50.830 LIB libspdk_sock_posix.a 00:02:50.830 SO libspdk_sock_posix.so.6.0 00:02:50.830 SYMLINK libspdk_fsdev_aio.so 00:02:51.088 SYMLINK 
libspdk_sock_posix.so 00:02:51.088 CC module/bdev/error/vbdev_error_rpc.o 00:02:51.088 CC module/bdev/error/vbdev_error.o 00:02:51.088 CC module/bdev/gpt/gpt.o 00:02:51.088 CC module/bdev/gpt/vbdev_gpt.o 00:02:51.088 CC module/bdev/delay/vbdev_delay.o 00:02:51.088 CC module/bdev/malloc/bdev_malloc.o 00:02:51.088 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:51.088 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:51.088 CC module/bdev/ftl/bdev_ftl.o 00:02:51.088 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:51.088 CC module/bdev/passthru/vbdev_passthru.o 00:02:51.088 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:51.088 CC module/bdev/null/bdev_null_rpc.o 00:02:51.088 CC module/bdev/null/bdev_null.o 00:02:51.088 CC module/blobfs/bdev/blobfs_bdev.o 00:02:51.088 CC module/bdev/split/vbdev_split.o 00:02:51.088 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:51.088 CC module/bdev/lvol/vbdev_lvol.o 00:02:51.088 CC module/bdev/split/vbdev_split_rpc.o 00:02:51.088 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:51.088 CC module/bdev/aio/bdev_aio.o 00:02:51.088 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:51.088 CC module/bdev/raid/bdev_raid.o 00:02:51.088 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:51.088 CC module/bdev/aio/bdev_aio_rpc.o 00:02:51.088 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:51.088 CC module/bdev/raid/bdev_raid_rpc.o 00:02:51.088 CC module/bdev/iscsi/bdev_iscsi.o 00:02:51.088 CC module/bdev/raid/bdev_raid_sb.o 00:02:51.088 CC module/bdev/raid/raid0.o 00:02:51.088 CC module/bdev/nvme/bdev_nvme.o 00:02:51.088 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:51.088 CC module/bdev/raid/raid1.o 00:02:51.088 CC module/bdev/raid/concat.o 00:02:51.088 CC module/bdev/nvme/nvme_rpc.o 00:02:51.088 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:51.088 CC module/bdev/nvme/bdev_mdns_client.o 00:02:51.088 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:51.088 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:51.088 CC module/bdev/nvme/vbdev_opal.o 00:02:51.088 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:51.088 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:51.345 LIB libspdk_blobfs_bdev.a 00:02:51.345 LIB libspdk_bdev_split.a 00:02:51.345 LIB libspdk_bdev_gpt.a 00:02:51.345 SO libspdk_blobfs_bdev.so.6.0 00:02:51.345 LIB libspdk_bdev_ftl.a 00:02:51.345 SO libspdk_bdev_gpt.so.6.0 00:02:51.345 SO libspdk_bdev_split.so.6.0 00:02:51.345 LIB libspdk_bdev_error.a 00:02:51.345 SO libspdk_bdev_ftl.so.6.0 00:02:51.345 SO libspdk_bdev_error.so.6.0 00:02:51.345 SYMLINK libspdk_blobfs_bdev.so 00:02:51.345 LIB libspdk_bdev_null.a 00:02:51.346 SYMLINK libspdk_bdev_gpt.so 00:02:51.346 LIB libspdk_bdev_passthru.a 00:02:51.346 SYMLINK libspdk_bdev_split.so 00:02:51.346 LIB libspdk_bdev_malloc.a 00:02:51.346 SO libspdk_bdev_passthru.so.6.0 00:02:51.346 SO libspdk_bdev_null.so.6.0 00:02:51.604 LIB libspdk_bdev_iscsi.a 00:02:51.604 SYMLINK libspdk_bdev_ftl.so 00:02:51.604 LIB libspdk_bdev_delay.a 00:02:51.604 LIB libspdk_bdev_aio.a 00:02:51.604 SYMLINK libspdk_bdev_error.so 00:02:51.604 LIB libspdk_bdev_zone_block.a 00:02:51.604 SO libspdk_bdev_malloc.so.6.0 00:02:51.604 SO libspdk_bdev_delay.so.6.0 00:02:51.604 SO libspdk_bdev_iscsi.so.6.0 00:02:51.604 SO libspdk_bdev_aio.so.6.0 00:02:51.604 SYMLINK libspdk_bdev_null.so 00:02:51.604 SO libspdk_bdev_zone_block.so.6.0 00:02:51.604 SYMLINK libspdk_bdev_passthru.so 00:02:51.604 SYMLINK libspdk_bdev_malloc.so 00:02:51.604 SYMLINK libspdk_bdev_delay.so 00:02:51.604 SYMLINK libspdk_bdev_iscsi.so 00:02:51.605 SYMLINK libspdk_bdev_aio.so 00:02:51.605 LIB libspdk_bdev_lvol.a 00:02:51.605 SYMLINK libspdk_bdev_zone_block.so 00:02:51.605 LIB libspdk_bdev_virtio.a 00:02:51.605 SO libspdk_bdev_lvol.so.6.0 00:02:51.605 SO libspdk_bdev_virtio.so.6.0 00:02:51.605 SYMLINK libspdk_bdev_lvol.so 00:02:51.864 SYMLINK libspdk_bdev_virtio.so 00:02:51.864 LIB libspdk_bdev_raid.a 00:02:51.864 SO libspdk_bdev_raid.so.6.0 00:02:52.123 SYMLINK libspdk_bdev_raid.so 00:02:53.061 LIB libspdk_bdev_nvme.a 00:02:53.061 SO 
libspdk_bdev_nvme.so.7.1 00:02:53.061 SYMLINK libspdk_bdev_nvme.so 00:02:53.629 CC module/event/subsystems/iobuf/iobuf.o 00:02:53.629 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:53.629 CC module/event/subsystems/vmd/vmd.o 00:02:53.629 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:53.629 CC module/event/subsystems/scheduler/scheduler.o 00:02:53.629 CC module/event/subsystems/keyring/keyring.o 00:02:53.629 CC module/event/subsystems/sock/sock.o 00:02:53.629 CC module/event/subsystems/fsdev/fsdev.o 00:02:53.629 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:53.887 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:53.887 LIB libspdk_event_vmd.a 00:02:53.887 LIB libspdk_event_vhost_blk.a 00:02:53.887 LIB libspdk_event_vfu_tgt.a 00:02:53.887 LIB libspdk_event_iobuf.a 00:02:53.887 LIB libspdk_event_keyring.a 00:02:53.887 LIB libspdk_event_sock.a 00:02:53.887 LIB libspdk_event_scheduler.a 00:02:53.887 LIB libspdk_event_fsdev.a 00:02:53.887 SO libspdk_event_vmd.so.6.0 00:02:53.887 SO libspdk_event_vfu_tgt.so.3.0 00:02:53.887 SO libspdk_event_vhost_blk.so.3.0 00:02:53.887 SO libspdk_event_keyring.so.1.0 00:02:53.887 SO libspdk_event_iobuf.so.3.0 00:02:53.887 SO libspdk_event_sock.so.5.0 00:02:53.887 SO libspdk_event_scheduler.so.4.0 00:02:53.887 SO libspdk_event_fsdev.so.1.0 00:02:53.887 SYMLINK libspdk_event_vfu_tgt.so 00:02:53.887 SYMLINK libspdk_event_vmd.so 00:02:53.887 SYMLINK libspdk_event_vhost_blk.so 00:02:53.887 SYMLINK libspdk_event_keyring.so 00:02:53.887 SYMLINK libspdk_event_iobuf.so 00:02:53.887 SYMLINK libspdk_event_scheduler.so 00:02:53.887 SYMLINK libspdk_event_sock.so 00:02:53.887 SYMLINK libspdk_event_fsdev.so 00:02:54.455 CC module/event/subsystems/accel/accel.o 00:02:54.455 LIB libspdk_event_accel.a 00:02:54.455 SO libspdk_event_accel.so.6.0 00:02:54.455 SYMLINK libspdk_event_accel.so 00:02:54.714 CC module/event/subsystems/bdev/bdev.o 00:02:54.973 LIB libspdk_event_bdev.a 00:02:54.973 SO libspdk_event_bdev.so.6.0 00:02:54.973 SYMLINK 
libspdk_event_bdev.so 00:02:55.539 CC module/event/subsystems/scsi/scsi.o 00:02:55.539 CC module/event/subsystems/ublk/ublk.o 00:02:55.539 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:55.539 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:55.539 CC module/event/subsystems/nbd/nbd.o 00:02:55.539 LIB libspdk_event_ublk.a 00:02:55.539 LIB libspdk_event_nbd.a 00:02:55.539 LIB libspdk_event_scsi.a 00:02:55.539 SO libspdk_event_nbd.so.6.0 00:02:55.539 SO libspdk_event_ublk.so.3.0 00:02:55.539 SO libspdk_event_scsi.so.6.0 00:02:55.539 LIB libspdk_event_nvmf.a 00:02:55.539 SYMLINK libspdk_event_nbd.so 00:02:55.539 SYMLINK libspdk_event_ublk.so 00:02:55.539 SYMLINK libspdk_event_scsi.so 00:02:55.539 SO libspdk_event_nvmf.so.6.0 00:02:55.797 SYMLINK libspdk_event_nvmf.so 00:02:56.055 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:56.055 CC module/event/subsystems/iscsi/iscsi.o 00:02:56.055 LIB libspdk_event_vhost_scsi.a 00:02:56.055 LIB libspdk_event_iscsi.a 00:02:56.055 SO libspdk_event_vhost_scsi.so.3.0 00:02:56.055 SO libspdk_event_iscsi.so.6.0 00:02:56.313 SYMLINK libspdk_event_vhost_scsi.so 00:02:56.313 SYMLINK libspdk_event_iscsi.so 00:02:56.313 SO libspdk.so.6.0 00:02:56.313 SYMLINK libspdk.so 00:02:56.882 CC test/rpc_client/rpc_client_test.o 00:02:56.882 CC app/spdk_nvme_discover/discovery_aer.o 00:02:56.882 CC app/trace_record/trace_record.o 00:02:56.882 CXX app/trace/trace.o 00:02:56.882 CC app/spdk_nvme_perf/perf.o 00:02:56.882 TEST_HEADER include/spdk/accel.h 00:02:56.882 TEST_HEADER include/spdk/accel_module.h 00:02:56.882 TEST_HEADER include/spdk/assert.h 00:02:56.882 CC app/spdk_lspci/spdk_lspci.o 00:02:56.882 TEST_HEADER include/spdk/barrier.h 00:02:56.882 TEST_HEADER include/spdk/base64.h 00:02:56.882 TEST_HEADER include/spdk/bdev.h 00:02:56.882 TEST_HEADER include/spdk/bdev_module.h 00:02:56.882 TEST_HEADER include/spdk/bdev_zone.h 00:02:56.882 TEST_HEADER include/spdk/bit_array.h 00:02:56.882 TEST_HEADER include/spdk/bit_pool.h 00:02:56.882 
TEST_HEADER include/spdk/blob_bdev.h 00:02:56.882 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:56.882 CC app/spdk_top/spdk_top.o 00:02:56.882 CC app/spdk_nvme_identify/identify.o 00:02:56.882 TEST_HEADER include/spdk/blobfs.h 00:02:56.882 TEST_HEADER include/spdk/blob.h 00:02:56.882 TEST_HEADER include/spdk/conf.h 00:02:56.882 TEST_HEADER include/spdk/cpuset.h 00:02:56.882 TEST_HEADER include/spdk/config.h 00:02:56.882 TEST_HEADER include/spdk/crc16.h 00:02:56.882 TEST_HEADER include/spdk/crc32.h 00:02:56.882 TEST_HEADER include/spdk/crc64.h 00:02:56.882 TEST_HEADER include/spdk/dif.h 00:02:56.882 TEST_HEADER include/spdk/dma.h 00:02:56.882 TEST_HEADER include/spdk/endian.h 00:02:56.882 TEST_HEADER include/spdk/env_dpdk.h 00:02:56.882 TEST_HEADER include/spdk/env.h 00:02:56.882 CC app/spdk_dd/spdk_dd.o 00:02:56.882 TEST_HEADER include/spdk/fd_group.h 00:02:56.882 TEST_HEADER include/spdk/event.h 00:02:56.882 TEST_HEADER include/spdk/fd.h 00:02:56.882 TEST_HEADER include/spdk/file.h 00:02:56.882 TEST_HEADER include/spdk/fsdev_module.h 00:02:56.882 TEST_HEADER include/spdk/fsdev.h 00:02:56.882 TEST_HEADER include/spdk/ftl.h 00:02:56.882 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:56.882 TEST_HEADER include/spdk/gpt_spec.h 00:02:56.882 TEST_HEADER include/spdk/histogram_data.h 00:02:56.882 TEST_HEADER include/spdk/hexlify.h 00:02:56.882 TEST_HEADER include/spdk/idxd.h 00:02:56.882 TEST_HEADER include/spdk/init.h 00:02:56.882 TEST_HEADER include/spdk/idxd_spec.h 00:02:56.882 TEST_HEADER include/spdk/ioat.h 00:02:56.882 TEST_HEADER include/spdk/ioat_spec.h 00:02:56.882 TEST_HEADER include/spdk/jsonrpc.h 00:02:56.882 TEST_HEADER include/spdk/iscsi_spec.h 00:02:56.882 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:56.882 TEST_HEADER include/spdk/json.h 00:02:56.882 TEST_HEADER include/spdk/keyring.h 00:02:56.882 TEST_HEADER include/spdk/keyring_module.h 00:02:56.882 TEST_HEADER include/spdk/likely.h 00:02:56.882 TEST_HEADER include/spdk/lvol.h 00:02:56.882 
TEST_HEADER include/spdk/log.h 00:02:56.882 TEST_HEADER include/spdk/memory.h 00:02:56.882 CC app/iscsi_tgt/iscsi_tgt.o 00:02:56.882 TEST_HEADER include/spdk/md5.h 00:02:56.882 TEST_HEADER include/spdk/mmio.h 00:02:56.882 TEST_HEADER include/spdk/net.h 00:02:56.882 TEST_HEADER include/spdk/nbd.h 00:02:56.882 TEST_HEADER include/spdk/notify.h 00:02:56.882 TEST_HEADER include/spdk/nvme.h 00:02:56.882 TEST_HEADER include/spdk/nvme_intel.h 00:02:56.882 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:56.882 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:56.882 CC app/nvmf_tgt/nvmf_main.o 00:02:56.882 TEST_HEADER include/spdk/nvme_spec.h 00:02:56.882 TEST_HEADER include/spdk/nvme_zns.h 00:02:56.882 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:56.882 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:56.882 TEST_HEADER include/spdk/nvmf_spec.h 00:02:56.882 TEST_HEADER include/spdk/nvmf_transport.h 00:02:56.882 TEST_HEADER include/spdk/nvmf.h 00:02:56.882 TEST_HEADER include/spdk/opal_spec.h 00:02:56.882 TEST_HEADER include/spdk/opal.h 00:02:56.882 TEST_HEADER include/spdk/pipe.h 00:02:56.882 TEST_HEADER include/spdk/pci_ids.h 00:02:56.882 TEST_HEADER include/spdk/queue.h 00:02:56.882 TEST_HEADER include/spdk/reduce.h 00:02:56.882 TEST_HEADER include/spdk/scheduler.h 00:02:56.882 TEST_HEADER include/spdk/scsi.h 00:02:56.882 TEST_HEADER include/spdk/rpc.h 00:02:56.882 TEST_HEADER include/spdk/sock.h 00:02:56.882 CC app/spdk_tgt/spdk_tgt.o 00:02:56.882 TEST_HEADER include/spdk/stdinc.h 00:02:56.882 TEST_HEADER include/spdk/scsi_spec.h 00:02:56.882 TEST_HEADER include/spdk/string.h 00:02:56.882 TEST_HEADER include/spdk/trace.h 00:02:56.882 TEST_HEADER include/spdk/trace_parser.h 00:02:56.882 TEST_HEADER include/spdk/thread.h 00:02:56.882 TEST_HEADER include/spdk/tree.h 00:02:56.882 TEST_HEADER include/spdk/ublk.h 00:02:56.883 TEST_HEADER include/spdk/util.h 00:02:56.883 TEST_HEADER include/spdk/uuid.h 00:02:56.883 TEST_HEADER include/spdk/version.h 00:02:56.883 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:02:56.883 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:56.883 TEST_HEADER include/spdk/vhost.h 00:02:56.883 TEST_HEADER include/spdk/vmd.h 00:02:56.883 TEST_HEADER include/spdk/xor.h 00:02:56.883 TEST_HEADER include/spdk/zipf.h 00:02:56.883 CXX test/cpp_headers/accel.o 00:02:56.883 CXX test/cpp_headers/accel_module.o 00:02:56.883 CXX test/cpp_headers/assert.o 00:02:56.883 CXX test/cpp_headers/barrier.o 00:02:56.883 CXX test/cpp_headers/bdev_module.o 00:02:56.883 CXX test/cpp_headers/bdev.o 00:02:56.883 CXX test/cpp_headers/base64.o 00:02:56.883 CXX test/cpp_headers/bit_array.o 00:02:56.883 CXX test/cpp_headers/bit_pool.o 00:02:56.883 CXX test/cpp_headers/bdev_zone.o 00:02:56.883 CXX test/cpp_headers/blob_bdev.o 00:02:56.883 CXX test/cpp_headers/blobfs_bdev.o 00:02:56.883 CXX test/cpp_headers/blobfs.o 00:02:56.883 CXX test/cpp_headers/conf.o 00:02:56.883 CXX test/cpp_headers/config.o 00:02:56.883 CXX test/cpp_headers/blob.o 00:02:56.883 CXX test/cpp_headers/cpuset.o 00:02:56.883 CXX test/cpp_headers/crc16.o 00:02:56.883 CXX test/cpp_headers/crc32.o 00:02:56.883 CXX test/cpp_headers/dma.o 00:02:56.883 CXX test/cpp_headers/crc64.o 00:02:56.883 CXX test/cpp_headers/dif.o 00:02:56.883 CXX test/cpp_headers/endian.o 00:02:56.883 CXX test/cpp_headers/env.o 00:02:56.883 CXX test/cpp_headers/event.o 00:02:56.883 CXX test/cpp_headers/fd.o 00:02:56.883 CXX test/cpp_headers/env_dpdk.o 00:02:56.883 CXX test/cpp_headers/fd_group.o 00:02:56.883 CXX test/cpp_headers/file.o 00:02:56.883 CXX test/cpp_headers/fsdev.o 00:02:56.883 CXX test/cpp_headers/ftl.o 00:02:56.883 CXX test/cpp_headers/fsdev_module.o 00:02:56.883 CXX test/cpp_headers/fuse_dispatcher.o 00:02:56.883 CXX test/cpp_headers/idxd.o 00:02:56.883 CXX test/cpp_headers/histogram_data.o 00:02:56.883 CXX test/cpp_headers/gpt_spec.o 00:02:56.883 CXX test/cpp_headers/idxd_spec.o 00:02:56.883 CXX test/cpp_headers/hexlify.o 00:02:56.883 CXX test/cpp_headers/ioat.o 00:02:56.883 CXX 
test/cpp_headers/init.o 00:02:56.883 CXX test/cpp_headers/ioat_spec.o 00:02:56.883 CXX test/cpp_headers/json.o 00:02:56.883 CXX test/cpp_headers/iscsi_spec.o 00:02:56.883 CXX test/cpp_headers/keyring.o 00:02:56.883 CXX test/cpp_headers/jsonrpc.o 00:02:56.883 CXX test/cpp_headers/keyring_module.o 00:02:56.883 CXX test/cpp_headers/likely.o 00:02:56.883 CXX test/cpp_headers/log.o 00:02:56.883 CXX test/cpp_headers/lvol.o 00:02:56.883 CXX test/cpp_headers/memory.o 00:02:56.883 CXX test/cpp_headers/md5.o 00:02:56.883 CXX test/cpp_headers/nbd.o 00:02:56.883 CXX test/cpp_headers/mmio.o 00:02:56.883 CXX test/cpp_headers/net.o 00:02:56.883 CXX test/cpp_headers/notify.o 00:02:56.883 CXX test/cpp_headers/nvme.o 00:02:56.883 CXX test/cpp_headers/nvme_intel.o 00:02:56.883 CXX test/cpp_headers/nvme_ocssd.o 00:02:56.883 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:56.883 CXX test/cpp_headers/nvme_spec.o 00:02:56.883 CXX test/cpp_headers/nvme_zns.o 00:02:56.883 CXX test/cpp_headers/nvmf_cmd.o 00:02:56.883 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:56.883 CXX test/cpp_headers/nvmf.o 00:02:56.883 CXX test/cpp_headers/nvmf_spec.o 00:02:56.883 CXX test/cpp_headers/nvmf_transport.o 00:02:56.883 CXX test/cpp_headers/opal.o 00:02:56.883 CC examples/ioat/perf/perf.o 00:02:56.883 CC test/app/histogram_perf/histogram_perf.o 00:02:56.883 CC test/env/memory/memory_ut.o 00:02:56.883 CC examples/ioat/verify/verify.o 00:02:56.883 CC test/env/vtophys/vtophys.o 00:02:56.883 CC test/app/jsoncat/jsoncat.o 00:02:56.883 CC test/app/stub/stub.o 00:02:56.883 CC test/thread/poller_perf/poller_perf.o 00:02:56.883 CC test/env/pci/pci_ut.o 00:02:56.883 CC app/fio/nvme/fio_plugin.o 00:02:56.883 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:56.883 CC examples/util/zipf/zipf.o 00:02:57.150 CC app/fio/bdev/fio_plugin.o 00:02:57.150 CC test/app/bdev_svc/bdev_svc.o 00:02:57.150 CC test/dma/test_dma/test_dma.o 00:02:57.150 LINK spdk_lspci 00:02:57.150 LINK rpc_client_test 00:02:57.150 LINK 
spdk_nvme_discover 00:02:57.416 CC test/env/mem_callbacks/mem_callbacks.o 00:02:57.416 LINK interrupt_tgt 00:02:57.416 LINK nvmf_tgt 00:02:57.416 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:57.416 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:57.416 LINK histogram_perf 00:02:57.416 LINK vtophys 00:02:57.416 LINK spdk_tgt 00:02:57.416 LINK iscsi_tgt 00:02:57.416 LINK zipf 00:02:57.416 LINK spdk_trace_record 00:02:57.416 CXX test/cpp_headers/opal_spec.o 00:02:57.416 CXX test/cpp_headers/pci_ids.o 00:02:57.416 CXX test/cpp_headers/pipe.o 00:02:57.416 CXX test/cpp_headers/queue.o 00:02:57.416 CXX test/cpp_headers/reduce.o 00:02:57.416 LINK poller_perf 00:02:57.416 CXX test/cpp_headers/rpc.o 00:02:57.416 CXX test/cpp_headers/scheduler.o 00:02:57.416 LINK jsoncat 00:02:57.416 CXX test/cpp_headers/scsi.o 00:02:57.416 CXX test/cpp_headers/scsi_spec.o 00:02:57.416 CXX test/cpp_headers/sock.o 00:02:57.416 CXX test/cpp_headers/stdinc.o 00:02:57.416 CXX test/cpp_headers/string.o 00:02:57.416 CXX test/cpp_headers/thread.o 00:02:57.416 CXX test/cpp_headers/trace.o 00:02:57.676 CXX test/cpp_headers/trace_parser.o 00:02:57.676 CXX test/cpp_headers/tree.o 00:02:57.676 LINK verify 00:02:57.676 CXX test/cpp_headers/ublk.o 00:02:57.676 LINK env_dpdk_post_init 00:02:57.676 CXX test/cpp_headers/util.o 00:02:57.676 CXX test/cpp_headers/uuid.o 00:02:57.676 CXX test/cpp_headers/version.o 00:02:57.676 CXX test/cpp_headers/vfio_user_pci.o 00:02:57.676 CXX test/cpp_headers/vfio_user_spec.o 00:02:57.676 CXX test/cpp_headers/vhost.o 00:02:57.676 CXX test/cpp_headers/vmd.o 00:02:57.676 CXX test/cpp_headers/xor.o 00:02:57.676 CXX test/cpp_headers/zipf.o 00:02:57.676 LINK stub 00:02:57.676 LINK ioat_perf 00:02:57.677 LINK bdev_svc 00:02:57.677 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:57.677 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:57.677 LINK spdk_dd 00:02:57.677 LINK spdk_trace 00:02:57.677 LINK pci_ut 00:02:57.936 LINK spdk_bdev 00:02:57.936 LINK test_dma 00:02:57.936 LINK 
nvme_fuzz 00:02:57.936 CC test/event/event_perf/event_perf.o 00:02:57.936 CC examples/vmd/led/led.o 00:02:57.936 CC test/event/reactor_perf/reactor_perf.o 00:02:57.936 CC examples/vmd/lsvmd/lsvmd.o 00:02:57.936 CC examples/idxd/perf/perf.o 00:02:57.936 CC test/event/reactor/reactor.o 00:02:57.936 LINK spdk_nvme 00:02:57.936 LINK spdk_nvme_perf 00:02:57.936 LINK mem_callbacks 00:02:57.936 CC test/event/app_repeat/app_repeat.o 00:02:57.936 CC test/event/scheduler/scheduler.o 00:02:57.936 CC examples/thread/thread/thread_ex.o 00:02:57.936 CC examples/sock/hello_world/hello_sock.o 00:02:57.936 CC app/vhost/vhost.o 00:02:57.936 LINK vhost_fuzz 00:02:58.194 LINK spdk_nvme_identify 00:02:58.194 LINK spdk_top 00:02:58.194 LINK reactor_perf 00:02:58.194 LINK lsvmd 00:02:58.194 LINK led 00:02:58.194 LINK event_perf 00:02:58.194 LINK reactor 00:02:58.194 LINK app_repeat 00:02:58.194 LINK vhost 00:02:58.194 LINK scheduler 00:02:58.194 LINK hello_sock 00:02:58.194 LINK thread 00:02:58.194 LINK idxd_perf 00:02:58.453 CC test/nvme/overhead/overhead.o 00:02:58.453 CC test/nvme/reserve/reserve.o 00:02:58.453 CC test/nvme/boot_partition/boot_partition.o 00:02:58.453 CC test/nvme/sgl/sgl.o 00:02:58.453 CC test/nvme/reset/reset.o 00:02:58.453 CC test/nvme/connect_stress/connect_stress.o 00:02:58.453 CC test/nvme/startup/startup.o 00:02:58.453 CC test/nvme/aer/aer.o 00:02:58.453 CC test/nvme/e2edp/nvme_dp.o 00:02:58.453 CC test/nvme/compliance/nvme_compliance.o 00:02:58.453 CC test/nvme/simple_copy/simple_copy.o 00:02:58.453 CC test/nvme/fdp/fdp.o 00:02:58.453 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:58.453 CC test/nvme/fused_ordering/fused_ordering.o 00:02:58.453 CC test/nvme/err_injection/err_injection.o 00:02:58.453 CC test/nvme/cuse/cuse.o 00:02:58.453 CC test/blobfs/mkfs/mkfs.o 00:02:58.453 CC test/accel/dif/dif.o 00:02:58.453 LINK memory_ut 00:02:58.453 CC test/lvol/esnap/esnap.o 00:02:58.453 LINK boot_partition 00:02:58.712 LINK reserve 00:02:58.712 LINK connect_stress 
00:02:58.712 LINK startup 00:02:58.712 LINK doorbell_aers 00:02:58.712 LINK err_injection 00:02:58.712 LINK reset 00:02:58.712 LINK fused_ordering 00:02:58.712 LINK simple_copy 00:02:58.712 LINK overhead 00:02:58.712 LINK mkfs 00:02:58.712 LINK nvme_dp 00:02:58.712 LINK sgl 00:02:58.712 LINK aer 00:02:58.712 CC examples/nvme/reconnect/reconnect.o 00:02:58.712 CC examples/nvme/abort/abort.o 00:02:58.712 LINK nvme_compliance 00:02:58.712 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:58.712 CC examples/nvme/hello_world/hello_world.o 00:02:58.712 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:58.712 CC examples/nvme/arbitration/arbitration.o 00:02:58.712 LINK fdp 00:02:58.712 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:58.712 CC examples/nvme/hotplug/hotplug.o 00:02:58.712 CC examples/accel/perf/accel_perf.o 00:02:58.712 CC examples/blob/cli/blobcli.o 00:02:58.712 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:58.712 CC examples/blob/hello_world/hello_blob.o 00:02:58.971 LINK cmb_copy 00:02:58.971 LINK pmr_persistence 00:02:58.971 LINK iscsi_fuzz 00:02:58.971 LINK hello_world 00:02:58.971 LINK hotplug 00:02:58.971 LINK arbitration 00:02:58.971 LINK reconnect 00:02:58.971 LINK dif 00:02:58.971 LINK abort 00:02:58.971 LINK hello_blob 00:02:58.971 LINK hello_fsdev 00:02:59.229 LINK nvme_manage 00:02:59.229 LINK accel_perf 00:02:59.229 LINK blobcli 00:02:59.487 LINK cuse 00:02:59.487 CC test/bdev/bdevio/bdevio.o 00:02:59.746 CC examples/bdev/hello_world/hello_bdev.o 00:02:59.746 CC examples/bdev/bdevperf/bdevperf.o 00:02:59.746 LINK bdevio 00:02:59.746 LINK hello_bdev 00:03:00.313 LINK bdevperf 00:03:00.880 CC examples/nvmf/nvmf/nvmf.o 00:03:01.138 LINK nvmf 00:03:02.073 LINK esnap 00:03:02.332 00:03:02.332 real 0m55.713s 00:03:02.332 user 8m0.707s 00:03:02.332 sys 3m39.171s 00:03:02.332 11:14:16 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:02.332 11:14:16 make -- common/autotest_common.sh@10 -- $ set +x 00:03:02.332 
************************************ 00:03:02.332 END TEST make 00:03:02.332 ************************************ 00:03:02.332 11:14:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:02.332 11:14:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:02.332 11:14:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:02.332 11:14:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.332 11:14:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:02.332 11:14:16 -- pm/common@44 -- $ pid=1986244 00:03:02.332 11:14:16 -- pm/common@50 -- $ kill -TERM 1986244 00:03:02.332 11:14:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.332 11:14:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:02.332 11:14:16 -- pm/common@44 -- $ pid=1986246 00:03:02.332 11:14:16 -- pm/common@50 -- $ kill -TERM 1986246 00:03:02.332 11:14:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.332 11:14:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:02.332 11:14:16 -- pm/common@44 -- $ pid=1986247 00:03:02.332 11:14:16 -- pm/common@50 -- $ kill -TERM 1986247 00:03:02.332 11:14:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.332 11:14:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:02.332 11:14:16 -- pm/common@44 -- $ pid=1986273 00:03:02.332 11:14:16 -- pm/common@50 -- $ sudo -E kill -TERM 1986273 00:03:02.592 11:14:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:02.592 11:14:16 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 
00:03:02.592 11:14:16 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:02.592 11:14:16 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:02.592 11:14:16 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:02.592 11:14:16 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:02.592 11:14:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:02.592 11:14:16 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:02.592 11:14:16 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:02.592 11:14:16 -- scripts/common.sh@336 -- # IFS=.-: 00:03:02.592 11:14:16 -- scripts/common.sh@336 -- # read -ra ver1 00:03:02.592 11:14:16 -- scripts/common.sh@337 -- # IFS=.-: 00:03:02.592 11:14:16 -- scripts/common.sh@337 -- # read -ra ver2 00:03:02.592 11:14:16 -- scripts/common.sh@338 -- # local 'op=<' 00:03:02.592 11:14:16 -- scripts/common.sh@340 -- # ver1_l=2 00:03:02.592 11:14:16 -- scripts/common.sh@341 -- # ver2_l=1 00:03:02.592 11:14:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:02.592 11:14:16 -- scripts/common.sh@344 -- # case "$op" in 00:03:02.592 11:14:16 -- scripts/common.sh@345 -- # : 1 00:03:02.592 11:14:16 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:02.592 11:14:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:02.592 11:14:16 -- scripts/common.sh@365 -- # decimal 1 00:03:02.592 11:14:16 -- scripts/common.sh@353 -- # local d=1 00:03:02.592 11:14:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:02.592 11:14:16 -- scripts/common.sh@355 -- # echo 1 00:03:02.592 11:14:16 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:02.592 11:14:16 -- scripts/common.sh@366 -- # decimal 2 00:03:02.592 11:14:16 -- scripts/common.sh@353 -- # local d=2 00:03:02.592 11:14:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:02.592 11:14:16 -- scripts/common.sh@355 -- # echo 2 00:03:02.592 11:14:16 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:02.592 11:14:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:02.592 11:14:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:02.592 11:14:16 -- scripts/common.sh@368 -- # return 0 00:03:02.592 11:14:16 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:02.592 11:14:16 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:02.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.592 --rc genhtml_branch_coverage=1 00:03:02.592 --rc genhtml_function_coverage=1 00:03:02.592 --rc genhtml_legend=1 00:03:02.592 --rc geninfo_all_blocks=1 00:03:02.592 --rc geninfo_unexecuted_blocks=1 00:03:02.592 00:03:02.592 ' 00:03:02.592 11:14:16 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:02.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.592 --rc genhtml_branch_coverage=1 00:03:02.592 --rc genhtml_function_coverage=1 00:03:02.592 --rc genhtml_legend=1 00:03:02.592 --rc geninfo_all_blocks=1 00:03:02.592 --rc geninfo_unexecuted_blocks=1 00:03:02.592 00:03:02.592 ' 00:03:02.592 11:14:16 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:02.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.592 --rc genhtml_branch_coverage=1 00:03:02.592 --rc 
genhtml_function_coverage=1 00:03:02.593 --rc genhtml_legend=1 00:03:02.593 --rc geninfo_all_blocks=1 00:03:02.593 --rc geninfo_unexecuted_blocks=1 00:03:02.593 00:03:02.593 ' 00:03:02.593 11:14:16 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:02.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.593 --rc genhtml_branch_coverage=1 00:03:02.593 --rc genhtml_function_coverage=1 00:03:02.593 --rc genhtml_legend=1 00:03:02.593 --rc geninfo_all_blocks=1 00:03:02.593 --rc geninfo_unexecuted_blocks=1 00:03:02.593 00:03:02.593 ' 00:03:02.593 11:14:16 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:02.593 11:14:16 -- nvmf/common.sh@7 -- # uname -s 00:03:02.593 11:14:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:02.593 11:14:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:02.593 11:14:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:02.593 11:14:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:02.593 11:14:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:02.593 11:14:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:02.593 11:14:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:02.593 11:14:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:02.593 11:14:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:02.593 11:14:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:02.593 11:14:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:02.593 11:14:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:02.593 11:14:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:02.593 11:14:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:02.593 11:14:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:02.593 11:14:16 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:02.593 11:14:16 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:02.593 11:14:16 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:02.593 11:14:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:02.593 11:14:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:02.593 11:14:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:02.593 11:14:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.593 11:14:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.593 11:14:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.593 11:14:16 -- paths/export.sh@5 -- # export PATH 00:03:02.593 11:14:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.593 11:14:16 -- nvmf/common.sh@51 -- # : 0 00:03:02.593 11:14:16 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:02.593 11:14:16 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:02.593 11:14:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:02.593 11:14:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:02.593 11:14:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:02.593 11:14:16 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:02.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:02.593 11:14:16 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:02.593 11:14:16 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:02.593 11:14:16 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:02.593 11:14:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:02.593 11:14:16 -- spdk/autotest.sh@32 -- # uname -s 00:03:02.593 11:14:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:02.593 11:14:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:02.593 11:14:16 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:02.593 11:14:16 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:02.593 11:14:16 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:02.593 11:14:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:02.593 11:14:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:02.593 11:14:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:02.593 11:14:16 -- spdk/autotest.sh@48 -- # udevadm_pid=2048705 00:03:02.593 11:14:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:02.593 11:14:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:02.593 11:14:16 -- pm/common@17 -- # local monitor 00:03:02.593 11:14:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.593 11:14:16 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:02.593 11:14:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.593 11:14:16 -- pm/common@21 -- # date +%s 00:03:02.593 11:14:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.593 11:14:16 -- pm/common@21 -- # date +%s 00:03:02.593 11:14:16 -- pm/common@25 -- # sleep 1 00:03:02.593 11:14:16 -- pm/common@21 -- # date +%s 00:03:02.593 11:14:16 -- pm/common@21 -- # date +%s 00:03:02.593 11:14:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732011256 00:03:02.593 11:14:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732011256 00:03:02.593 11:14:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732011256 00:03:02.593 11:14:16 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732011256 00:03:02.853 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732011256_collect-cpu-load.pm.log 00:03:02.853 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732011256_collect-vmstat.pm.log 00:03:02.853 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732011256_collect-cpu-temp.pm.log 00:03:02.853 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732011256_collect-bmc-pm.bmc.pm.log 00:03:03.791 
11:14:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:03.791 11:14:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:03.791 11:14:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:03.791 11:14:17 -- common/autotest_common.sh@10 -- # set +x 00:03:03.791 11:14:17 -- spdk/autotest.sh@59 -- # create_test_list 00:03:03.791 11:14:17 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:03.791 11:14:17 -- common/autotest_common.sh@10 -- # set +x 00:03:03.791 11:14:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:03.791 11:14:17 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:03.791 11:14:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:03.791 11:14:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:03.791 11:14:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:03.791 11:14:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:03.791 11:14:17 -- common/autotest_common.sh@1457 -- # uname 00:03:03.791 11:14:17 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:03.791 11:14:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:03.791 11:14:17 -- common/autotest_common.sh@1477 -- # uname 00:03:03.791 11:14:17 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:03.791 11:14:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:03.791 11:14:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:03.791 lcov: LCOV version 1.15 00:03:03.791 11:14:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:21.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:21.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:30.106 11:14:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:30.106 11:14:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:30.106 11:14:42 -- common/autotest_common.sh@10 -- # set +x 00:03:30.106 11:14:42 -- spdk/autotest.sh@78 -- # rm -f 00:03:30.106 11:14:42 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.484 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:31.484 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:31.484 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:31.484 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:31.484 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:31.484 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:31.743 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:31.743 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:31.743 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:31.743 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:31.743 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:31.743 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:31.743 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:31.743 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:31.743 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:31.743 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:31.743 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:32.003 11:14:45 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:32.003 11:14:45 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:32.003 11:14:45 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:32.003 11:14:45 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:32.003 11:14:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:32.003 11:14:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:32.003 11:14:45 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:32.003 11:14:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:32.003 11:14:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:32.003 11:14:45 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:32.003 11:14:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:32.003 11:14:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:32.003 11:14:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:32.003 11:14:45 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:32.003 11:14:45 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:32.003 No valid GPT data, bailing 00:03:32.003 11:14:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:32.003 11:14:45 -- scripts/common.sh@394 -- # pt= 00:03:32.003 11:14:45 -- scripts/common.sh@395 -- # return 1 00:03:32.003 11:14:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:32.003 1+0 records in 00:03:32.003 1+0 records out 00:03:32.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00162747 s, 644 MB/s 00:03:32.003 11:14:45 -- spdk/autotest.sh@105 -- # sync 00:03:32.003 11:14:45 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:32.003 11:14:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:32.003 11:14:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:38.577 11:14:51 -- spdk/autotest.sh@111 -- # uname -s 00:03:38.577 11:14:51 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:38.577 11:14:51 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:38.577 11:14:51 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:40.485 Hugepages 00:03:40.485 node hugesize free / total 00:03:40.485 node0 1048576kB 0 / 0 00:03:40.485 node0 2048kB 0 / 0 00:03:40.485 node1 1048576kB 0 / 0 00:03:40.485 node1 2048kB 0 / 0 00:03:40.485 00:03:40.485 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:40.485 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:40.485 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:40.485 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:40.485 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:40.485 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:40.485 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:40.485 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:40.485 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:40.485 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:40.485 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:40.485 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:40.485 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:40.485 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:40.485 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:40.485 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:40.485 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:40.485 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:40.485 11:14:54 -- spdk/autotest.sh@117 -- # uname -s 00:03:40.485 11:14:54 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:40.485 11:14:54 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:40.485 11:14:54 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.777 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:43.777 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:44.352 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:44.353 11:14:57 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:45.290 11:14:58 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:45.290 11:14:58 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:45.290 11:14:58 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:45.290 11:14:58 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:45.290 11:14:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:45.290 11:14:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:45.290 11:14:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:45.290 11:14:58 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:45.290 11:14:58 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:03:45.290 11:14:59 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:45.290 11:14:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:45.290 11:14:59 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.582 Waiting for block devices as requested 00:03:48.582 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:48.582 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:48.582 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:48.582 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:48.582 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:48.582 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:48.582 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:48.841 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:48.841 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:48.841 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:49.100 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:49.100 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:49.100 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:49.100 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:49.360 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:49.360 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:49.360 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:49.620 11:15:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:49.620 11:15:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:49.620 11:15:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:49.620 11:15:03 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:49.620 11:15:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:49.620 11:15:03 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:49.620 11:15:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:49.620 11:15:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:49.620 11:15:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:49.620 11:15:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:49.620 11:15:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:49.620 11:15:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:49.620 11:15:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:49.620 11:15:03 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:49.620 11:15:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:49.620 11:15:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:49.620 11:15:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:49.620 11:15:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:49.620 11:15:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:49.620 11:15:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:49.620 11:15:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:49.620 11:15:03 -- common/autotest_common.sh@1543 -- # continue 00:03:49.620 11:15:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:49.620 11:15:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:49.620 11:15:03 -- common/autotest_common.sh@10 -- # set +x 00:03:49.620 11:15:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:49.620 11:15:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.620 11:15:03 -- common/autotest_common.sh@10 -- # set +x 00:03:49.620 11:15:03 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.914 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:52.914 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:52.914 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:53.484 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.484 11:15:07 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:53.484 11:15:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.484 11:15:07 -- common/autotest_common.sh@10 -- # set +x 00:03:53.484 11:15:07 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:53.484 11:15:07 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:53.484 11:15:07 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:53.484 11:15:07 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:53.484 11:15:07 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:53.484 11:15:07 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:53.484 11:15:07 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:53.484 11:15:07 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:53.484 11:15:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:53.484 11:15:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:53.484 11:15:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:53.484 11:15:07 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:53.484 11:15:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:53.743 11:15:07 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:53.743 11:15:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:53.743 11:15:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:53.743 11:15:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:53.743 11:15:07 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:53.743 11:15:07 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:53.743 11:15:07 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:53.743 11:15:07 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:53.743 11:15:07 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:53.743 11:15:07 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:53.743 11:15:07 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2063443 00:03:53.743 11:15:07 -- common/autotest_common.sh@1585 -- # waitforlisten 2063443 00:03:53.743 11:15:07 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.743 11:15:07 -- common/autotest_common.sh@835 -- # '[' -z 2063443 ']' 00:03:53.743 11:15:07 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:53.743 11:15:07 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:53.743 11:15:07 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:53.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:53.743 11:15:07 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:53.743 11:15:07 -- common/autotest_common.sh@10 -- # set +x 00:03:53.743 [2024-11-19 11:15:07.359145] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:03:53.743 [2024-11-19 11:15:07.359194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2063443 ] 00:03:53.743 [2024-11-19 11:15:07.435271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.743 [2024-11-19 11:15:07.475511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.003 11:15:07 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:54.003 11:15:07 -- common/autotest_common.sh@868 -- # return 0 00:03:54.003 11:15:07 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:54.003 11:15:07 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:54.003 11:15:07 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:57.298 nvme0n1 00:03:57.298 11:15:10 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:57.298 [2024-11-19 11:15:10.879263] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:57.298 request: 00:03:57.298 { 00:03:57.298 "nvme_ctrlr_name": "nvme0", 00:03:57.298 "password": "test", 00:03:57.298 "method": "bdev_nvme_opal_revert", 00:03:57.298 "req_id": 1 00:03:57.298 } 00:03:57.298 Got JSON-RPC error response 00:03:57.298 response: 00:03:57.298 { 00:03:57.298 "code": -32602, 00:03:57.298 "message": "Invalid parameters" 00:03:57.298 } 00:03:57.298 11:15:10 -- common/autotest_common.sh@1591 -- # true 
00:03:57.298 11:15:10 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:57.298 11:15:10 -- common/autotest_common.sh@1595 -- # killprocess 2063443 00:03:57.298 11:15:10 -- common/autotest_common.sh@954 -- # '[' -z 2063443 ']' 00:03:57.298 11:15:10 -- common/autotest_common.sh@958 -- # kill -0 2063443 00:03:57.298 11:15:10 -- common/autotest_common.sh@959 -- # uname 00:03:57.298 11:15:10 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.298 11:15:10 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2063443 00:03:57.298 11:15:10 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.298 11:15:10 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.298 11:15:10 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2063443' 00:03:57.298 killing process with pid 2063443 00:03:57.298 11:15:10 -- common/autotest_common.sh@973 -- # kill 2063443 00:03:57.298 11:15:10 -- common/autotest_common.sh@978 -- # wait 2063443 00:03:59.204 11:15:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:59.204 11:15:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:59.204 11:15:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:59.204 11:15:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:59.204 11:15:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:59.204 11:15:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:59.204 11:15:12 -- common/autotest_common.sh@10 -- # set +x 00:03:59.204 11:15:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:59.204 11:15:12 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:59.204 11:15:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.204 11:15:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.204 11:15:12 -- common/autotest_common.sh@10 -- # set +x 00:03:59.204 ************************************ 00:03:59.204 START TEST env 00:03:59.204 
************************************ 00:03:59.204 11:15:12 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:59.204 * Looking for test storage... 00:03:59.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:59.204 11:15:12 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:59.204 11:15:12 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:59.204 11:15:12 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:59.204 11:15:12 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:59.204 11:15:12 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.204 11:15:12 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.204 11:15:12 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.205 11:15:12 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.205 11:15:12 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.205 11:15:12 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.205 11:15:12 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.205 11:15:12 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.205 11:15:12 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.205 11:15:12 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.205 11:15:12 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.205 11:15:12 env -- scripts/common.sh@344 -- # case "$op" in 00:03:59.205 11:15:12 env -- scripts/common.sh@345 -- # : 1 00:03:59.205 11:15:12 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.205 11:15:12 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.205 11:15:12 env -- scripts/common.sh@365 -- # decimal 1 00:03:59.205 11:15:12 env -- scripts/common.sh@353 -- # local d=1 00:03:59.205 11:15:12 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.205 11:15:12 env -- scripts/common.sh@355 -- # echo 1 00:03:59.205 11:15:12 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.205 11:15:12 env -- scripts/common.sh@366 -- # decimal 2 00:03:59.205 11:15:12 env -- scripts/common.sh@353 -- # local d=2 00:03:59.205 11:15:12 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.205 11:15:12 env -- scripts/common.sh@355 -- # echo 2 00:03:59.205 11:15:12 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.205 11:15:12 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.205 11:15:12 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.205 11:15:12 env -- scripts/common.sh@368 -- # return 0 00:03:59.205 11:15:12 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.205 11:15:12 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:59.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.205 --rc genhtml_branch_coverage=1 00:03:59.205 --rc genhtml_function_coverage=1 00:03:59.205 --rc genhtml_legend=1 00:03:59.205 --rc geninfo_all_blocks=1 00:03:59.205 --rc geninfo_unexecuted_blocks=1 00:03:59.205 00:03:59.205 ' 00:03:59.205 11:15:12 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:59.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.205 --rc genhtml_branch_coverage=1 00:03:59.205 --rc genhtml_function_coverage=1 00:03:59.205 --rc genhtml_legend=1 00:03:59.205 --rc geninfo_all_blocks=1 00:03:59.205 --rc geninfo_unexecuted_blocks=1 00:03:59.205 00:03:59.205 ' 00:03:59.205 11:15:12 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:59.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:59.205 --rc genhtml_branch_coverage=1 00:03:59.205 --rc genhtml_function_coverage=1 00:03:59.205 --rc genhtml_legend=1 00:03:59.205 --rc geninfo_all_blocks=1 00:03:59.205 --rc geninfo_unexecuted_blocks=1 00:03:59.205 00:03:59.205 ' 00:03:59.205 11:15:12 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:59.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.205 --rc genhtml_branch_coverage=1 00:03:59.205 --rc genhtml_function_coverage=1 00:03:59.205 --rc genhtml_legend=1 00:03:59.205 --rc geninfo_all_blocks=1 00:03:59.205 --rc geninfo_unexecuted_blocks=1 00:03:59.205 00:03:59.205 ' 00:03:59.205 11:15:12 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:59.205 11:15:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.205 11:15:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.205 11:15:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.205 ************************************ 00:03:59.205 START TEST env_memory 00:03:59.205 ************************************ 00:03:59.205 11:15:12 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:59.205 00:03:59.205 00:03:59.205 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.205 http://cunit.sourceforge.net/ 00:03:59.205 00:03:59.205 00:03:59.205 Suite: memory 00:03:59.205 Test: alloc and free memory map ...[2024-11-19 11:15:12.831442] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:59.205 passed 00:03:59.205 Test: mem map translation ...[2024-11-19 11:15:12.849869] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:59.205 [2024-11-19 
11:15:12.849881] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:59.205 [2024-11-19 11:15:12.849915] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:59.205 [2024-11-19 11:15:12.849922] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:59.205 passed 00:03:59.205 Test: mem map registration ...[2024-11-19 11:15:12.888480] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:59.205 [2024-11-19 11:15:12.888494] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:59.205 passed 00:03:59.205 Test: mem map adjacent registrations ...passed 00:03:59.205 00:03:59.205 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.205 suites 1 1 n/a 0 0 00:03:59.205 tests 4 4 4 0 0 00:03:59.205 asserts 152 152 152 0 n/a 00:03:59.205 00:03:59.205 Elapsed time = 0.144 seconds 00:03:59.205 00:03:59.205 real 0m0.158s 00:03:59.205 user 0m0.146s 00:03:59.205 sys 0m0.011s 00:03:59.205 11:15:12 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.205 11:15:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:59.205 ************************************ 00:03:59.205 END TEST env_memory 00:03:59.205 ************************************ 00:03:59.205 11:15:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:59.205 11:15:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:59.205 11:15:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.205 11:15:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.466 ************************************ 00:03:59.466 START TEST env_vtophys 00:03:59.466 ************************************ 00:03:59.466 11:15:13 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:59.466 EAL: lib.eal log level changed from notice to debug 00:03:59.466 EAL: Detected lcore 0 as core 0 on socket 0 00:03:59.466 EAL: Detected lcore 1 as core 1 on socket 0 00:03:59.466 EAL: Detected lcore 2 as core 2 on socket 0 00:03:59.466 EAL: Detected lcore 3 as core 3 on socket 0 00:03:59.466 EAL: Detected lcore 4 as core 4 on socket 0 00:03:59.466 EAL: Detected lcore 5 as core 5 on socket 0 00:03:59.466 EAL: Detected lcore 6 as core 6 on socket 0 00:03:59.466 EAL: Detected lcore 7 as core 8 on socket 0 00:03:59.466 EAL: Detected lcore 8 as core 9 on socket 0 00:03:59.466 EAL: Detected lcore 9 as core 10 on socket 0 00:03:59.466 EAL: Detected lcore 10 as core 11 on socket 0 00:03:59.466 EAL: Detected lcore 11 as core 12 on socket 0 00:03:59.466 EAL: Detected lcore 12 as core 13 on socket 0 00:03:59.466 EAL: Detected lcore 13 as core 16 on socket 0 00:03:59.466 EAL: Detected lcore 14 as core 17 on socket 0 00:03:59.466 EAL: Detected lcore 15 as core 18 on socket 0 00:03:59.466 EAL: Detected lcore 16 as core 19 on socket 0 00:03:59.466 EAL: Detected lcore 17 as core 20 on socket 0 00:03:59.466 EAL: Detected lcore 18 as core 21 on socket 0 00:03:59.466 EAL: Detected lcore 19 as core 25 on socket 0 00:03:59.466 EAL: Detected lcore 20 as core 26 on socket 0 00:03:59.466 EAL: Detected lcore 21 as core 27 on socket 0 00:03:59.466 EAL: Detected lcore 22 as core 28 on socket 0 00:03:59.466 EAL: Detected lcore 23 as core 29 on socket 0 00:03:59.466 EAL: Detected lcore 24 as core 0 on socket 1 00:03:59.466 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:59.466 EAL: Detected lcore 26 as core 2 on socket 1 00:03:59.466 EAL: Detected lcore 27 as core 3 on socket 1 00:03:59.466 EAL: Detected lcore 28 as core 4 on socket 1 00:03:59.466 EAL: Detected lcore 29 as core 5 on socket 1 00:03:59.466 EAL: Detected lcore 30 as core 6 on socket 1 00:03:59.466 EAL: Detected lcore 31 as core 9 on socket 1 00:03:59.466 EAL: Detected lcore 32 as core 10 on socket 1 00:03:59.466 EAL: Detected lcore 33 as core 11 on socket 1 00:03:59.466 EAL: Detected lcore 34 as core 12 on socket 1 00:03:59.466 EAL: Detected lcore 35 as core 13 on socket 1 00:03:59.466 EAL: Detected lcore 36 as core 16 on socket 1 00:03:59.466 EAL: Detected lcore 37 as core 17 on socket 1 00:03:59.466 EAL: Detected lcore 38 as core 18 on socket 1 00:03:59.466 EAL: Detected lcore 39 as core 19 on socket 1 00:03:59.466 EAL: Detected lcore 40 as core 20 on socket 1 00:03:59.466 EAL: Detected lcore 41 as core 21 on socket 1 00:03:59.466 EAL: Detected lcore 42 as core 24 on socket 1 00:03:59.466 EAL: Detected lcore 43 as core 25 on socket 1 00:03:59.466 EAL: Detected lcore 44 as core 26 on socket 1 00:03:59.466 EAL: Detected lcore 45 as core 27 on socket 1 00:03:59.466 EAL: Detected lcore 46 as core 28 on socket 1 00:03:59.466 EAL: Detected lcore 47 as core 29 on socket 1 00:03:59.466 EAL: Detected lcore 48 as core 0 on socket 0 00:03:59.466 EAL: Detected lcore 49 as core 1 on socket 0 00:03:59.466 EAL: Detected lcore 50 as core 2 on socket 0 00:03:59.466 EAL: Detected lcore 51 as core 3 on socket 0 00:03:59.466 EAL: Detected lcore 52 as core 4 on socket 0 00:03:59.466 EAL: Detected lcore 53 as core 5 on socket 0 00:03:59.466 EAL: Detected lcore 54 as core 6 on socket 0 00:03:59.466 EAL: Detected lcore 55 as core 8 on socket 0 00:03:59.466 EAL: Detected lcore 56 as core 9 on socket 0 00:03:59.466 EAL: Detected lcore 57 as core 10 on socket 0 00:03:59.466 EAL: Detected lcore 58 as core 11 on socket 0 00:03:59.466 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:59.466 EAL: Detected lcore 60 as core 13 on socket 0 00:03:59.466 EAL: Detected lcore 61 as core 16 on socket 0 00:03:59.466 EAL: Detected lcore 62 as core 17 on socket 0 00:03:59.466 EAL: Detected lcore 63 as core 18 on socket 0 00:03:59.466 EAL: Detected lcore 64 as core 19 on socket 0 00:03:59.466 EAL: Detected lcore 65 as core 20 on socket 0 00:03:59.466 EAL: Detected lcore 66 as core 21 on socket 0 00:03:59.466 EAL: Detected lcore 67 as core 25 on socket 0 00:03:59.466 EAL: Detected lcore 68 as core 26 on socket 0 00:03:59.466 EAL: Detected lcore 69 as core 27 on socket 0 00:03:59.466 EAL: Detected lcore 70 as core 28 on socket 0 00:03:59.466 EAL: Detected lcore 71 as core 29 on socket 0 00:03:59.466 EAL: Detected lcore 72 as core 0 on socket 1 00:03:59.466 EAL: Detected lcore 73 as core 1 on socket 1 00:03:59.466 EAL: Detected lcore 74 as core 2 on socket 1 00:03:59.466 EAL: Detected lcore 75 as core 3 on socket 1 00:03:59.466 EAL: Detected lcore 76 as core 4 on socket 1 00:03:59.466 EAL: Detected lcore 77 as core 5 on socket 1 00:03:59.466 EAL: Detected lcore 78 as core 6 on socket 1 00:03:59.466 EAL: Detected lcore 79 as core 9 on socket 1 00:03:59.467 EAL: Detected lcore 80 as core 10 on socket 1 00:03:59.467 EAL: Detected lcore 81 as core 11 on socket 1 00:03:59.467 EAL: Detected lcore 82 as core 12 on socket 1 00:03:59.467 EAL: Detected lcore 83 as core 13 on socket 1 00:03:59.467 EAL: Detected lcore 84 as core 16 on socket 1 00:03:59.467 EAL: Detected lcore 85 as core 17 on socket 1 00:03:59.467 EAL: Detected lcore 86 as core 18 on socket 1 00:03:59.467 EAL: Detected lcore 87 as core 19 on socket 1 00:03:59.467 EAL: Detected lcore 88 as core 20 on socket 1 00:03:59.467 EAL: Detected lcore 89 as core 21 on socket 1 00:03:59.467 EAL: Detected lcore 90 as core 24 on socket 1 00:03:59.467 EAL: Detected lcore 91 as core 25 on socket 1 00:03:59.467 EAL: Detected lcore 92 as core 26 on socket 1 00:03:59.467 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:59.467 EAL: Detected lcore 94 as core 28 on socket 1 00:03:59.467 EAL: Detected lcore 95 as core 29 on socket 1 00:03:59.467 EAL: Maximum logical cores by configuration: 128 00:03:59.467 EAL: Detected CPU lcores: 96 00:03:59.467 EAL: Detected NUMA nodes: 2 00:03:59.467 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:59.467 EAL: Detected shared linkage of DPDK 00:03:59.467 EAL: No shared files mode enabled, IPC will be disabled 00:03:59.467 EAL: Bus pci wants IOVA as 'DC' 00:03:59.467 EAL: Buses did not request a specific IOVA mode. 00:03:59.467 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:59.467 EAL: Selected IOVA mode 'VA' 00:03:59.467 EAL: Probing VFIO support... 00:03:59.467 EAL: IOMMU type 1 (Type 1) is supported 00:03:59.467 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:59.467 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:59.467 EAL: VFIO support initialized 00:03:59.467 EAL: Ask a virtual area of 0x2e000 bytes 00:03:59.467 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:59.467 EAL: Setting up physically contiguous memory... 
00:03:59.467 EAL: Setting maximum number of open files to 524288 00:03:59.467 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:59.467 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:59.467 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:59.467 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.467 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:59.467 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:59.467 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.467 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:59.467 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:59.467 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.467 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:59.467 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:59.467 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.467 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:59.467 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:59.467 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.467 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:59.467 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:59.467 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.467 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:59.467 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:59.467 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.467 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:59.467 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:59.467 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.467 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:59.467 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:59.467 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:59.467 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.467 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:59.467 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:59.467 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.467 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:59.467 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:59.467 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.467 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:59.467 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:59.467 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.467 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:59.467 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:59.467 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.467 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:59.467 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:59.467 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.467 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:59.467 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:59.467 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.467 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:59.467 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:59.467 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.467 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:59.467 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:59.467 EAL: Hugepages will be freed exactly as allocated. 
00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: TSC frequency is ~2300000 KHz 00:03:59.467 EAL: Main lcore 0 is ready (tid=7fdf0bd2aa00;cpuset=[0]) 00:03:59.467 EAL: Trying to obtain current memory policy. 00:03:59.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.467 EAL: Restoring previous memory policy: 0 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was expanded by 2MB 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:59.467 EAL: Mem event callback 'spdk:(nil)' registered 00:03:59.467 00:03:59.467 00:03:59.467 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.467 http://cunit.sourceforge.net/ 00:03:59.467 00:03:59.467 00:03:59.467 Suite: components_suite 00:03:59.467 Test: vtophys_malloc_test ...passed 00:03:59.467 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:59.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.467 EAL: Restoring previous memory policy: 4 00:03:59.467 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was expanded by 4MB 00:03:59.467 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was shrunk by 4MB 00:03:59.467 EAL: Trying to obtain current memory policy. 
00:03:59.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.467 EAL: Restoring previous memory policy: 4 00:03:59.467 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was expanded by 6MB 00:03:59.467 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was shrunk by 6MB 00:03:59.467 EAL: Trying to obtain current memory policy. 00:03:59.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.467 EAL: Restoring previous memory policy: 4 00:03:59.467 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was expanded by 10MB 00:03:59.467 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was shrunk by 10MB 00:03:59.467 EAL: Trying to obtain current memory policy. 00:03:59.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.467 EAL: Restoring previous memory policy: 4 00:03:59.467 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was expanded by 18MB 00:03:59.467 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was shrunk by 18MB 00:03:59.467 EAL: Trying to obtain current memory policy. 
00:03:59.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.467 EAL: Restoring previous memory policy: 4 00:03:59.467 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was expanded by 34MB 00:03:59.467 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was shrunk by 34MB 00:03:59.467 EAL: Trying to obtain current memory policy. 00:03:59.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.467 EAL: Restoring previous memory policy: 4 00:03:59.467 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was expanded by 66MB 00:03:59.467 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.467 EAL: request: mp_malloc_sync 00:03:59.467 EAL: No shared files mode enabled, IPC is disabled 00:03:59.467 EAL: Heap on socket 0 was shrunk by 66MB 00:03:59.467 EAL: Trying to obtain current memory policy. 00:03:59.468 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.468 EAL: Restoring previous memory policy: 4 00:03:59.468 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.468 EAL: request: mp_malloc_sync 00:03:59.468 EAL: No shared files mode enabled, IPC is disabled 00:03:59.468 EAL: Heap on socket 0 was expanded by 130MB 00:03:59.468 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.468 EAL: request: mp_malloc_sync 00:03:59.468 EAL: No shared files mode enabled, IPC is disabled 00:03:59.468 EAL: Heap on socket 0 was shrunk by 130MB 00:03:59.468 EAL: Trying to obtain current memory policy. 
00:03:59.468 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.727 EAL: Restoring previous memory policy: 4 00:03:59.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.727 EAL: request: mp_malloc_sync 00:03:59.727 EAL: No shared files mode enabled, IPC is disabled 00:03:59.727 EAL: Heap on socket 0 was expanded by 258MB 00:03:59.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.727 EAL: request: mp_malloc_sync 00:03:59.727 EAL: No shared files mode enabled, IPC is disabled 00:03:59.727 EAL: Heap on socket 0 was shrunk by 258MB 00:03:59.727 EAL: Trying to obtain current memory policy. 00:03:59.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.727 EAL: Restoring previous memory policy: 4 00:03:59.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.727 EAL: request: mp_malloc_sync 00:03:59.727 EAL: No shared files mode enabled, IPC is disabled 00:03:59.727 EAL: Heap on socket 0 was expanded by 514MB 00:03:59.986 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.986 EAL: request: mp_malloc_sync 00:03:59.986 EAL: No shared files mode enabled, IPC is disabled 00:03:59.986 EAL: Heap on socket 0 was shrunk by 514MB 00:03:59.986 EAL: Trying to obtain current memory policy. 
00:03:59.986 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.245 EAL: Restoring previous memory policy: 4 00:04:00.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.245 EAL: request: mp_malloc_sync 00:04:00.245 EAL: No shared files mode enabled, IPC is disabled 00:04:00.245 EAL: Heap on socket 0 was expanded by 1026MB 00:04:00.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.505 EAL: request: mp_malloc_sync 00:04:00.505 EAL: No shared files mode enabled, IPC is disabled 00:04:00.505 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:00.505 passed 00:04:00.505 00:04:00.505 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.505 suites 1 1 n/a 0 0 00:04:00.505 tests 2 2 2 0 0 00:04:00.505 asserts 497 497 497 0 n/a 00:04:00.505 00:04:00.505 Elapsed time = 0.976 seconds 00:04:00.505 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.505 EAL: request: mp_malloc_sync 00:04:00.505 EAL: No shared files mode enabled, IPC is disabled 00:04:00.505 EAL: Heap on socket 0 was shrunk by 2MB 00:04:00.505 EAL: No shared files mode enabled, IPC is disabled 00:04:00.505 EAL: No shared files mode enabled, IPC is disabled 00:04:00.505 EAL: No shared files mode enabled, IPC is disabled 00:04:00.505 00:04:00.505 real 0m1.098s 00:04:00.505 user 0m0.645s 00:04:00.505 sys 0m0.424s 00:04:00.505 11:15:14 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.505 11:15:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:00.505 ************************************ 00:04:00.505 END TEST env_vtophys 00:04:00.505 ************************************ 00:04:00.505 11:15:14 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:00.505 11:15:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.505 11:15:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.505 11:15:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.505 
************************************ 00:04:00.505 START TEST env_pci 00:04:00.505 ************************************ 00:04:00.505 11:15:14 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:00.505 00:04:00.505 00:04:00.505 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.505 http://cunit.sourceforge.net/ 00:04:00.505 00:04:00.505 00:04:00.505 Suite: pci 00:04:00.505 Test: pci_hook ...[2024-11-19 11:15:14.198428] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2064672 has claimed it 00:04:00.505 EAL: Cannot find device (10000:00:01.0) 00:04:00.505 EAL: Failed to attach device on primary process 00:04:00.505 passed 00:04:00.505 00:04:00.505 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.505 suites 1 1 n/a 0 0 00:04:00.505 tests 1 1 1 0 0 00:04:00.505 asserts 25 25 25 0 n/a 00:04:00.505 00:04:00.505 Elapsed time = 0.026 seconds 00:04:00.505 00:04:00.505 real 0m0.046s 00:04:00.505 user 0m0.014s 00:04:00.505 sys 0m0.031s 00:04:00.505 11:15:14 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.505 11:15:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:00.505 ************************************ 00:04:00.505 END TEST env_pci 00:04:00.505 ************************************ 00:04:00.505 11:15:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:00.505 11:15:14 env -- env/env.sh@15 -- # uname 00:04:00.505 11:15:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:00.505 11:15:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:00.505 11:15:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:00.505 11:15:14 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:00.505 11:15:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.505 11:15:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.766 ************************************ 00:04:00.766 START TEST env_dpdk_post_init 00:04:00.766 ************************************ 00:04:00.766 11:15:14 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:00.766 EAL: Detected CPU lcores: 96 00:04:00.766 EAL: Detected NUMA nodes: 2 00:04:00.766 EAL: Detected shared linkage of DPDK 00:04:00.766 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:00.766 EAL: Selected IOVA mode 'VA' 00:04:00.766 EAL: VFIO support initialized 00:04:00.766 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:00.766 EAL: Using IOMMU type 1 (Type 1) 00:04:00.766 EAL: Ignore mapping IO port bar(1) 00:04:00.766 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:00.766 EAL: Ignore mapping IO port bar(1) 00:04:00.766 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:00.766 EAL: Ignore mapping IO port bar(1) 00:04:00.766 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:00.766 EAL: Ignore mapping IO port bar(1) 00:04:00.766 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:00.766 EAL: Ignore mapping IO port bar(1) 00:04:00.766 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:00.766 EAL: Ignore mapping IO port bar(1) 00:04:00.766 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:00.766 EAL: Ignore mapping IO port bar(1) 00:04:00.766 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:00.766 EAL: Ignore mapping IO port bar(1) 00:04:00.766 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:01.705 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:01.705 EAL: Ignore mapping IO port bar(1) 00:04:01.705 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:01.705 EAL: Ignore mapping IO port bar(1) 00:04:01.705 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:01.705 EAL: Ignore mapping IO port bar(1) 00:04:01.705 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:01.705 EAL: Ignore mapping IO port bar(1) 00:04:01.705 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:01.705 EAL: Ignore mapping IO port bar(1) 00:04:01.705 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:01.705 EAL: Ignore mapping IO port bar(1) 00:04:01.705 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:01.705 EAL: Ignore mapping IO port bar(1) 00:04:01.705 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:01.705 EAL: Ignore mapping IO port bar(1) 00:04:01.705 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:04.994 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:04.994 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:04.994 Starting DPDK initialization... 00:04:04.994 Starting SPDK post initialization... 00:04:04.994 SPDK NVMe probe 00:04:04.994 Attaching to 0000:5e:00.0 00:04:04.994 Attached to 0000:5e:00.0 00:04:04.994 Cleaning up... 
00:04:04.994 00:04:04.994 real 0m4.333s 00:04:04.994 user 0m2.931s 00:04:04.994 sys 0m0.469s 00:04:04.994 11:15:18 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.994 11:15:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.994 ************************************ 00:04:04.994 END TEST env_dpdk_post_init 00:04:04.994 ************************************ 00:04:04.994 11:15:18 env -- env/env.sh@26 -- # uname 00:04:04.994 11:15:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:04.994 11:15:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:04.994 11:15:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.994 11:15:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.994 11:15:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.994 ************************************ 00:04:04.994 START TEST env_mem_callbacks 00:04:04.994 ************************************ 00:04:04.994 11:15:18 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:04.994 EAL: Detected CPU lcores: 96 00:04:04.994 EAL: Detected NUMA nodes: 2 00:04:04.994 EAL: Detected shared linkage of DPDK 00:04:04.994 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:04.994 EAL: Selected IOVA mode 'VA' 00:04:04.994 EAL: VFIO support initialized 00:04:04.994 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:04.994 00:04:04.994 00:04:04.994 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.994 http://cunit.sourceforge.net/ 00:04:04.994 00:04:04.994 00:04:04.994 Suite: memory 00:04:04.994 Test: test ... 
00:04:04.994 register 0x200000200000 2097152 00:04:04.994 malloc 3145728 00:04:04.994 register 0x200000400000 4194304 00:04:04.994 buf 0x200000500000 len 3145728 PASSED 00:04:04.994 malloc 64 00:04:04.994 buf 0x2000004fff40 len 64 PASSED 00:04:04.994 malloc 4194304 00:04:04.994 register 0x200000800000 6291456 00:04:04.994 buf 0x200000a00000 len 4194304 PASSED 00:04:04.994 free 0x200000500000 3145728 00:04:04.994 free 0x2000004fff40 64 00:04:04.994 unregister 0x200000400000 4194304 PASSED 00:04:04.994 free 0x200000a00000 4194304 00:04:04.994 unregister 0x200000800000 6291456 PASSED 00:04:04.994 malloc 8388608 00:04:04.994 register 0x200000400000 10485760 00:04:04.994 buf 0x200000600000 len 8388608 PASSED 00:04:04.994 free 0x200000600000 8388608 00:04:04.994 unregister 0x200000400000 10485760 PASSED 00:04:04.994 passed 00:04:04.994 00:04:04.994 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.994 suites 1 1 n/a 0 0 00:04:04.994 tests 1 1 1 0 0 00:04:04.994 asserts 15 15 15 0 n/a 00:04:04.994 00:04:04.994 Elapsed time = 0.008 seconds 00:04:04.994 00:04:04.994 real 0m0.058s 00:04:04.994 user 0m0.022s 00:04:04.995 sys 0m0.036s 00:04:04.995 11:15:18 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.995 11:15:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:04.995 ************************************ 00:04:04.995 END TEST env_mem_callbacks 00:04:04.995 ************************************ 00:04:05.253 00:04:05.253 real 0m6.225s 00:04:05.253 user 0m3.978s 00:04:05.253 sys 0m1.318s 00:04:05.253 11:15:18 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.253 11:15:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.253 ************************************ 00:04:05.253 END TEST env 00:04:05.253 ************************************ 00:04:05.253 11:15:18 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:05.253 11:15:18 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.253 11:15:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.253 11:15:18 -- common/autotest_common.sh@10 -- # set +x 00:04:05.253 ************************************ 00:04:05.253 START TEST rpc 00:04:05.253 ************************************ 00:04:05.253 11:15:18 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:05.253 * Looking for test storage... 00:04:05.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:05.253 11:15:18 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:05.253 11:15:18 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:05.253 11:15:18 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:05.511 11:15:19 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:05.511 11:15:19 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.512 11:15:19 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.512 11:15:19 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.512 11:15:19 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.512 11:15:19 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.512 11:15:19 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.512 11:15:19 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.512 11:15:19 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.512 11:15:19 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.512 11:15:19 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.512 11:15:19 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.512 11:15:19 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:05.512 11:15:19 rpc -- scripts/common.sh@345 -- # : 1 00:04:05.512 11:15:19 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.512 11:15:19 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.512 11:15:19 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:05.512 11:15:19 rpc -- scripts/common.sh@353 -- # local d=1 00:04:05.512 11:15:19 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.512 11:15:19 rpc -- scripts/common.sh@355 -- # echo 1 00:04:05.512 11:15:19 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.512 11:15:19 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:05.512 11:15:19 rpc -- scripts/common.sh@353 -- # local d=2 00:04:05.512 11:15:19 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.512 11:15:19 rpc -- scripts/common.sh@355 -- # echo 2 00:04:05.512 11:15:19 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.512 11:15:19 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.512 11:15:19 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.512 11:15:19 rpc -- scripts/common.sh@368 -- # return 0 00:04:05.512 11:15:19 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.512 11:15:19 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:05.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.512 --rc genhtml_branch_coverage=1 00:04:05.512 --rc genhtml_function_coverage=1 00:04:05.512 --rc genhtml_legend=1 00:04:05.512 --rc geninfo_all_blocks=1 00:04:05.512 --rc geninfo_unexecuted_blocks=1 00:04:05.512 00:04:05.512 ' 00:04:05.512 11:15:19 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:05.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.512 --rc genhtml_branch_coverage=1 00:04:05.512 --rc genhtml_function_coverage=1 00:04:05.512 --rc genhtml_legend=1 00:04:05.512 --rc geninfo_all_blocks=1 00:04:05.512 --rc geninfo_unexecuted_blocks=1 00:04:05.512 00:04:05.512 ' 00:04:05.512 11:15:19 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:05.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:05.512 --rc genhtml_branch_coverage=1 00:04:05.512 --rc genhtml_function_coverage=1 00:04:05.512 --rc genhtml_legend=1 00:04:05.512 --rc geninfo_all_blocks=1 00:04:05.512 --rc geninfo_unexecuted_blocks=1 00:04:05.512 00:04:05.512 ' 00:04:05.512 11:15:19 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:05.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.512 --rc genhtml_branch_coverage=1 00:04:05.512 --rc genhtml_function_coverage=1 00:04:05.512 --rc genhtml_legend=1 00:04:05.512 --rc geninfo_all_blocks=1 00:04:05.512 --rc geninfo_unexecuted_blocks=1 00:04:05.512 00:04:05.512 ' 00:04:05.512 11:15:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2065588 00:04:05.512 11:15:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:05.512 11:15:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.512 11:15:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2065588 00:04:05.512 11:15:19 rpc -- common/autotest_common.sh@835 -- # '[' -z 2065588 ']' 00:04:05.512 11:15:19 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.512 11:15:19 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:05.512 11:15:19 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.512 11:15:19 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:05.512 11:15:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.512 [2024-11-19 11:15:19.109255] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:05.512 [2024-11-19 11:15:19.109300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2065588 ] 00:04:05.512 [2024-11-19 11:15:19.185709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.512 [2024-11-19 11:15:19.227252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:05.512 [2024-11-19 11:15:19.227282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2065588' to capture a snapshot of events at runtime. 00:04:05.512 [2024-11-19 11:15:19.227289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:05.512 [2024-11-19 11:15:19.227296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:05.512 [2024-11-19 11:15:19.227300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2065588 for offline analysis/debug. 
00:04:05.512 [2024-11-19 11:15:19.227834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.485 11:15:19 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.485 11:15:19 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:06.485 11:15:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.485 11:15:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.485 11:15:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:06.485 11:15:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:06.485 11:15:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.485 11:15:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.485 11:15:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.485 ************************************ 00:04:06.485 START TEST rpc_integrity 00:04:06.485 ************************************ 00:04:06.485 11:15:19 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:06.485 11:15:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:06.485 11:15:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.485 11:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.485 11:15:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.485 11:15:19 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:06.485 11:15:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.485 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.485 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.485 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.485 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.485 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.485 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:06.485 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.485 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.485 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.485 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.485 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.485 { 00:04:06.485 "name": "Malloc0", 00:04:06.485 "aliases": [ 00:04:06.485 "bef522c8-c383-427b-bdb3-0bc8a114216d" 00:04:06.485 ], 00:04:06.485 "product_name": "Malloc disk", 00:04:06.485 "block_size": 512, 00:04:06.485 "num_blocks": 16384, 00:04:06.485 "uuid": "bef522c8-c383-427b-bdb3-0bc8a114216d", 00:04:06.485 "assigned_rate_limits": { 00:04:06.485 "rw_ios_per_sec": 0, 00:04:06.485 "rw_mbytes_per_sec": 0, 00:04:06.485 "r_mbytes_per_sec": 0, 00:04:06.485 "w_mbytes_per_sec": 0 00:04:06.485 }, 00:04:06.485 "claimed": false, 00:04:06.485 "zoned": false, 00:04:06.485 "supported_io_types": { 00:04:06.485 "read": true, 00:04:06.485 "write": true, 00:04:06.485 "unmap": true, 00:04:06.485 "flush": true, 00:04:06.485 "reset": true, 00:04:06.485 "nvme_admin": false, 00:04:06.485 "nvme_io": false, 00:04:06.485 "nvme_io_md": false, 00:04:06.485 "write_zeroes": true, 00:04:06.485 "zcopy": true, 00:04:06.485 "get_zone_info": false, 00:04:06.485 
"zone_management": false, 00:04:06.485 "zone_append": false, 00:04:06.485 "compare": false, 00:04:06.485 "compare_and_write": false, 00:04:06.485 "abort": true, 00:04:06.485 "seek_hole": false, 00:04:06.485 "seek_data": false, 00:04:06.485 "copy": true, 00:04:06.485 "nvme_iov_md": false 00:04:06.485 }, 00:04:06.485 "memory_domains": [ 00:04:06.485 { 00:04:06.485 "dma_device_id": "system", 00:04:06.485 "dma_device_type": 1 00:04:06.485 }, 00:04:06.485 { 00:04:06.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.485 "dma_device_type": 2 00:04:06.485 } 00:04:06.485 ], 00:04:06.485 "driver_specific": {} 00:04:06.485 } 00:04:06.485 ]' 00:04:06.485 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.485 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.485 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:06.485 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.485 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.485 [2024-11-19 11:15:20.095559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:06.485 [2024-11-19 11:15:20.095592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.485 [2024-11-19 11:15:20.095604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13616e0 00:04:06.485 [2024-11-19 11:15:20.095611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.485 [2024-11-19 11:15:20.096775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.485 [2024-11-19 11:15:20.096797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.485 Passthru0 00:04:06.485 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.485 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:06.485 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.485 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.485 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.485 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:06.485 { 00:04:06.485 "name": "Malloc0", 00:04:06.485 "aliases": [ 00:04:06.485 "bef522c8-c383-427b-bdb3-0bc8a114216d" 00:04:06.485 ], 00:04:06.485 "product_name": "Malloc disk", 00:04:06.485 "block_size": 512, 00:04:06.485 "num_blocks": 16384, 00:04:06.485 "uuid": "bef522c8-c383-427b-bdb3-0bc8a114216d", 00:04:06.485 "assigned_rate_limits": { 00:04:06.485 "rw_ios_per_sec": 0, 00:04:06.485 "rw_mbytes_per_sec": 0, 00:04:06.485 "r_mbytes_per_sec": 0, 00:04:06.485 "w_mbytes_per_sec": 0 00:04:06.485 }, 00:04:06.485 "claimed": true, 00:04:06.485 "claim_type": "exclusive_write", 00:04:06.485 "zoned": false, 00:04:06.485 "supported_io_types": { 00:04:06.485 "read": true, 00:04:06.485 "write": true, 00:04:06.486 "unmap": true, 00:04:06.486 "flush": true, 00:04:06.486 "reset": true, 00:04:06.486 "nvme_admin": false, 00:04:06.486 "nvme_io": false, 00:04:06.486 "nvme_io_md": false, 00:04:06.486 "write_zeroes": true, 00:04:06.486 "zcopy": true, 00:04:06.486 "get_zone_info": false, 00:04:06.486 "zone_management": false, 00:04:06.486 "zone_append": false, 00:04:06.486 "compare": false, 00:04:06.486 "compare_and_write": false, 00:04:06.486 "abort": true, 00:04:06.486 "seek_hole": false, 00:04:06.486 "seek_data": false, 00:04:06.486 "copy": true, 00:04:06.486 "nvme_iov_md": false 00:04:06.486 }, 00:04:06.486 "memory_domains": [ 00:04:06.486 { 00:04:06.486 "dma_device_id": "system", 00:04:06.486 "dma_device_type": 1 00:04:06.486 }, 00:04:06.486 { 00:04:06.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.486 "dma_device_type": 2 00:04:06.486 } 00:04:06.486 ], 00:04:06.486 "driver_specific": {} 00:04:06.486 }, 00:04:06.486 { 
00:04:06.486 "name": "Passthru0", 00:04:06.486 "aliases": [ 00:04:06.486 "31854b98-434d-5674-af4e-a6b9a93b7065" 00:04:06.486 ], 00:04:06.486 "product_name": "passthru", 00:04:06.486 "block_size": 512, 00:04:06.486 "num_blocks": 16384, 00:04:06.486 "uuid": "31854b98-434d-5674-af4e-a6b9a93b7065", 00:04:06.486 "assigned_rate_limits": { 00:04:06.486 "rw_ios_per_sec": 0, 00:04:06.486 "rw_mbytes_per_sec": 0, 00:04:06.486 "r_mbytes_per_sec": 0, 00:04:06.486 "w_mbytes_per_sec": 0 00:04:06.486 }, 00:04:06.486 "claimed": false, 00:04:06.486 "zoned": false, 00:04:06.486 "supported_io_types": { 00:04:06.486 "read": true, 00:04:06.486 "write": true, 00:04:06.486 "unmap": true, 00:04:06.486 "flush": true, 00:04:06.486 "reset": true, 00:04:06.486 "nvme_admin": false, 00:04:06.486 "nvme_io": false, 00:04:06.486 "nvme_io_md": false, 00:04:06.486 "write_zeroes": true, 00:04:06.486 "zcopy": true, 00:04:06.486 "get_zone_info": false, 00:04:06.486 "zone_management": false, 00:04:06.486 "zone_append": false, 00:04:06.486 "compare": false, 00:04:06.486 "compare_and_write": false, 00:04:06.486 "abort": true, 00:04:06.486 "seek_hole": false, 00:04:06.486 "seek_data": false, 00:04:06.486 "copy": true, 00:04:06.486 "nvme_iov_md": false 00:04:06.486 }, 00:04:06.486 "memory_domains": [ 00:04:06.486 { 00:04:06.486 "dma_device_id": "system", 00:04:06.486 "dma_device_type": 1 00:04:06.486 }, 00:04:06.486 { 00:04:06.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.486 "dma_device_type": 2 00:04:06.486 } 00:04:06.486 ], 00:04:06.486 "driver_specific": { 00:04:06.486 "passthru": { 00:04:06.486 "name": "Passthru0", 00:04:06.486 "base_bdev_name": "Malloc0" 00:04:06.486 } 00:04:06.486 } 00:04:06.486 } 00:04:06.486 ]' 00:04:06.486 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.486 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.486 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.486 11:15:20 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.486 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.486 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.486 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:06.486 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.486 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.486 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.486 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:06.486 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.486 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.486 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.486 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:06.486 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:06.805 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:06.805 00:04:06.805 real 0m0.282s 00:04:06.805 user 0m0.174s 00:04:06.805 sys 0m0.036s 00:04:06.805 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.805 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.805 ************************************ 00:04:06.805 END TEST rpc_integrity 00:04:06.805 ************************************ 00:04:06.805 11:15:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:06.805 11:15:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.805 11:15:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.805 11:15:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.805 ************************************ 00:04:06.805 START TEST rpc_plugins 
00:04:06.805 ************************************ 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:06.805 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.805 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:06.805 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.805 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:06.805 { 00:04:06.805 "name": "Malloc1", 00:04:06.805 "aliases": [ 00:04:06.805 "6eee4b98-cad6-4456-a471-cf2c5c5be8c1" 00:04:06.805 ], 00:04:06.805 "product_name": "Malloc disk", 00:04:06.805 "block_size": 4096, 00:04:06.805 "num_blocks": 256, 00:04:06.805 "uuid": "6eee4b98-cad6-4456-a471-cf2c5c5be8c1", 00:04:06.805 "assigned_rate_limits": { 00:04:06.805 "rw_ios_per_sec": 0, 00:04:06.805 "rw_mbytes_per_sec": 0, 00:04:06.805 "r_mbytes_per_sec": 0, 00:04:06.805 "w_mbytes_per_sec": 0 00:04:06.805 }, 00:04:06.805 "claimed": false, 00:04:06.805 "zoned": false, 00:04:06.805 "supported_io_types": { 00:04:06.805 "read": true, 00:04:06.805 "write": true, 00:04:06.805 "unmap": true, 00:04:06.805 "flush": true, 00:04:06.805 "reset": true, 00:04:06.805 "nvme_admin": false, 00:04:06.805 "nvme_io": false, 00:04:06.805 "nvme_io_md": false, 00:04:06.805 "write_zeroes": true, 00:04:06.805 "zcopy": true, 00:04:06.805 "get_zone_info": false, 00:04:06.805 "zone_management": false, 00:04:06.805 
"zone_append": false, 00:04:06.805 "compare": false, 00:04:06.805 "compare_and_write": false, 00:04:06.805 "abort": true, 00:04:06.805 "seek_hole": false, 00:04:06.805 "seek_data": false, 00:04:06.805 "copy": true, 00:04:06.805 "nvme_iov_md": false 00:04:06.805 }, 00:04:06.805 "memory_domains": [ 00:04:06.805 { 00:04:06.805 "dma_device_id": "system", 00:04:06.805 "dma_device_type": 1 00:04:06.805 }, 00:04:06.805 { 00:04:06.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.805 "dma_device_type": 2 00:04:06.805 } 00:04:06.805 ], 00:04:06.805 "driver_specific": {} 00:04:06.805 } 00:04:06.805 ]' 00:04:06.805 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:06.805 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:06.805 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.805 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.805 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:06.805 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:06.805 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:06.805 00:04:06.805 real 0m0.141s 00:04:06.805 user 0m0.092s 00:04:06.805 sys 0m0.013s 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.805 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.805 ************************************ 
00:04:06.805 END TEST rpc_plugins 00:04:06.805 ************************************ 00:04:06.805 11:15:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:06.805 11:15:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.805 11:15:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.806 11:15:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.806 ************************************ 00:04:06.806 START TEST rpc_trace_cmd_test 00:04:06.806 ************************************ 00:04:06.806 11:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:06.806 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:06.806 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:06.806 11:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.806 11:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:06.806 11:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.806 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:06.806 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2065588", 00:04:06.806 "tpoint_group_mask": "0x8", 00:04:06.806 "iscsi_conn": { 00:04:06.806 "mask": "0x2", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "scsi": { 00:04:06.806 "mask": "0x4", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "bdev": { 00:04:06.806 "mask": "0x8", 00:04:06.806 "tpoint_mask": "0xffffffffffffffff" 00:04:06.806 }, 00:04:06.806 "nvmf_rdma": { 00:04:06.806 "mask": "0x10", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "nvmf_tcp": { 00:04:06.806 "mask": "0x20", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "ftl": { 00:04:06.806 "mask": "0x40", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "blobfs": { 00:04:06.806 "mask": "0x80", 00:04:06.806 
"tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "dsa": { 00:04:06.806 "mask": "0x200", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "thread": { 00:04:06.806 "mask": "0x400", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "nvme_pcie": { 00:04:06.806 "mask": "0x800", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "iaa": { 00:04:06.806 "mask": "0x1000", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "nvme_tcp": { 00:04:06.806 "mask": "0x2000", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "bdev_nvme": { 00:04:06.806 "mask": "0x4000", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "sock": { 00:04:06.806 "mask": "0x8000", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "blob": { 00:04:06.806 "mask": "0x10000", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "bdev_raid": { 00:04:06.806 "mask": "0x20000", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 }, 00:04:06.806 "scheduler": { 00:04:06.806 "mask": "0x40000", 00:04:06.806 "tpoint_mask": "0x0" 00:04:06.806 } 00:04:06.806 }' 00:04:06.806 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:07.100 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:07.100 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:07.100 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:07.100 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:07.100 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:07.100 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:07.100 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:07.100 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:07.100 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:07.100 00:04:07.100 real 0m0.218s 00:04:07.100 user 0m0.183s 00:04:07.100 sys 0m0.025s 00:04:07.100 11:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.100 11:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.100 ************************************ 00:04:07.100 END TEST rpc_trace_cmd_test 00:04:07.100 ************************************ 00:04:07.100 11:15:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:07.100 11:15:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:07.100 11:15:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:07.100 11:15:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.100 11:15:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.100 11:15:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.100 ************************************ 00:04:07.100 START TEST rpc_daemon_integrity 00:04:07.100 ************************************ 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.100 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:07.358 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.358 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.358 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.358 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.358 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.358 { 00:04:07.358 "name": "Malloc2", 00:04:07.358 "aliases": [ 00:04:07.358 "e4c0179a-8717-4ed9-bc64-3cfa839a4dc1" 00:04:07.358 ], 00:04:07.358 "product_name": "Malloc disk", 00:04:07.358 "block_size": 512, 00:04:07.358 "num_blocks": 16384, 00:04:07.358 "uuid": "e4c0179a-8717-4ed9-bc64-3cfa839a4dc1", 00:04:07.358 "assigned_rate_limits": { 00:04:07.358 "rw_ios_per_sec": 0, 00:04:07.358 "rw_mbytes_per_sec": 0, 00:04:07.358 "r_mbytes_per_sec": 0, 00:04:07.358 "w_mbytes_per_sec": 0 00:04:07.358 }, 00:04:07.358 "claimed": false, 00:04:07.358 "zoned": false, 00:04:07.358 "supported_io_types": { 00:04:07.358 "read": true, 00:04:07.358 "write": true, 00:04:07.358 "unmap": true, 00:04:07.358 "flush": true, 00:04:07.358 "reset": true, 00:04:07.358 "nvme_admin": false, 00:04:07.358 "nvme_io": false, 00:04:07.358 "nvme_io_md": false, 00:04:07.358 "write_zeroes": true, 00:04:07.358 "zcopy": true, 00:04:07.358 "get_zone_info": false, 00:04:07.358 "zone_management": false, 00:04:07.358 "zone_append": false, 00:04:07.358 "compare": false, 00:04:07.358 "compare_and_write": false, 00:04:07.358 "abort": true, 00:04:07.358 "seek_hole": false, 00:04:07.358 "seek_data": false, 00:04:07.358 "copy": true, 00:04:07.358 "nvme_iov_md": false 00:04:07.358 }, 00:04:07.359 "memory_domains": [ 00:04:07.359 { 
00:04:07.359 "dma_device_id": "system", 00:04:07.359 "dma_device_type": 1 00:04:07.359 }, 00:04:07.359 { 00:04:07.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.359 "dma_device_type": 2 00:04:07.359 } 00:04:07.359 ], 00:04:07.359 "driver_specific": {} 00:04:07.359 } 00:04:07.359 ]' 00:04:07.359 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.359 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.359 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:07.359 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.359 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.359 [2024-11-19 11:15:20.941853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:07.359 [2024-11-19 11:15:20.941880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.359 [2024-11-19 11:15:20.941892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13f1b70 00:04:07.359 [2024-11-19 11:15:20.941898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.359 [2024-11-19 11:15:20.942889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.359 [2024-11-19 11:15:20.942910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.359 Passthru0 00:04:07.359 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.359 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.359 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.359 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.359 11:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:07.359 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.359 { 00:04:07.359 "name": "Malloc2", 00:04:07.359 "aliases": [ 00:04:07.359 "e4c0179a-8717-4ed9-bc64-3cfa839a4dc1" 00:04:07.359 ], 00:04:07.359 "product_name": "Malloc disk", 00:04:07.359 "block_size": 512, 00:04:07.359 "num_blocks": 16384, 00:04:07.359 "uuid": "e4c0179a-8717-4ed9-bc64-3cfa839a4dc1", 00:04:07.359 "assigned_rate_limits": { 00:04:07.359 "rw_ios_per_sec": 0, 00:04:07.359 "rw_mbytes_per_sec": 0, 00:04:07.359 "r_mbytes_per_sec": 0, 00:04:07.359 "w_mbytes_per_sec": 0 00:04:07.359 }, 00:04:07.359 "claimed": true, 00:04:07.359 "claim_type": "exclusive_write", 00:04:07.359 "zoned": false, 00:04:07.359 "supported_io_types": { 00:04:07.359 "read": true, 00:04:07.359 "write": true, 00:04:07.359 "unmap": true, 00:04:07.359 "flush": true, 00:04:07.359 "reset": true, 00:04:07.359 "nvme_admin": false, 00:04:07.359 "nvme_io": false, 00:04:07.359 "nvme_io_md": false, 00:04:07.359 "write_zeroes": true, 00:04:07.359 "zcopy": true, 00:04:07.359 "get_zone_info": false, 00:04:07.359 "zone_management": false, 00:04:07.359 "zone_append": false, 00:04:07.359 "compare": false, 00:04:07.359 "compare_and_write": false, 00:04:07.359 "abort": true, 00:04:07.359 "seek_hole": false, 00:04:07.359 "seek_data": false, 00:04:07.359 "copy": true, 00:04:07.359 "nvme_iov_md": false 00:04:07.359 }, 00:04:07.359 "memory_domains": [ 00:04:07.359 { 00:04:07.359 "dma_device_id": "system", 00:04:07.359 "dma_device_type": 1 00:04:07.359 }, 00:04:07.359 { 00:04:07.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.359 "dma_device_type": 2 00:04:07.359 } 00:04:07.359 ], 00:04:07.359 "driver_specific": {} 00:04:07.359 }, 00:04:07.359 { 00:04:07.359 "name": "Passthru0", 00:04:07.359 "aliases": [ 00:04:07.359 "abac24ba-74fd-5fe6-b2a1-b5f4b91863da" 00:04:07.359 ], 00:04:07.359 "product_name": "passthru", 00:04:07.359 "block_size": 512, 00:04:07.359 "num_blocks": 16384, 00:04:07.359 "uuid": 
"abac24ba-74fd-5fe6-b2a1-b5f4b91863da", 00:04:07.359 "assigned_rate_limits": { 00:04:07.359 "rw_ios_per_sec": 0, 00:04:07.359 "rw_mbytes_per_sec": 0, 00:04:07.359 "r_mbytes_per_sec": 0, 00:04:07.359 "w_mbytes_per_sec": 0 00:04:07.359 }, 00:04:07.359 "claimed": false, 00:04:07.359 "zoned": false, 00:04:07.359 "supported_io_types": { 00:04:07.359 "read": true, 00:04:07.359 "write": true, 00:04:07.359 "unmap": true, 00:04:07.359 "flush": true, 00:04:07.359 "reset": true, 00:04:07.359 "nvme_admin": false, 00:04:07.359 "nvme_io": false, 00:04:07.359 "nvme_io_md": false, 00:04:07.359 "write_zeroes": true, 00:04:07.359 "zcopy": true, 00:04:07.359 "get_zone_info": false, 00:04:07.359 "zone_management": false, 00:04:07.359 "zone_append": false, 00:04:07.359 "compare": false, 00:04:07.359 "compare_and_write": false, 00:04:07.359 "abort": true, 00:04:07.359 "seek_hole": false, 00:04:07.359 "seek_data": false, 00:04:07.359 "copy": true, 00:04:07.359 "nvme_iov_md": false 00:04:07.359 }, 00:04:07.359 "memory_domains": [ 00:04:07.359 { 00:04:07.359 "dma_device_id": "system", 00:04:07.359 "dma_device_type": 1 00:04:07.359 }, 00:04:07.359 { 00:04:07.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.359 "dma_device_type": 2 00:04:07.359 } 00:04:07.359 ], 00:04:07.359 "driver_specific": { 00:04:07.359 "passthru": { 00:04:07.359 "name": "Passthru0", 00:04:07.359 "base_bdev_name": "Malloc2" 00:04:07.359 } 00:04:07.359 } 00:04:07.359 } 00:04:07.359 ]' 00:04:07.359 11:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.359 00:04:07.359 real 0m0.268s 00:04:07.359 user 0m0.184s 00:04:07.359 sys 0m0.025s 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.359 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.359 ************************************ 00:04:07.359 END TEST rpc_daemon_integrity 00:04:07.359 ************************************ 00:04:07.359 11:15:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:07.359 11:15:21 rpc -- rpc/rpc.sh@84 -- # killprocess 2065588 00:04:07.359 11:15:21 rpc -- common/autotest_common.sh@954 -- # '[' -z 2065588 ']' 00:04:07.359 11:15:21 rpc -- common/autotest_common.sh@958 -- # kill -0 2065588 00:04:07.359 11:15:21 rpc -- common/autotest_common.sh@959 -- # uname 00:04:07.359 11:15:21 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.359 11:15:21 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2065588 00:04:07.618 11:15:21 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.618 11:15:21 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.618 11:15:21 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2065588' 00:04:07.618 killing process with pid 2065588 00:04:07.618 11:15:21 rpc -- common/autotest_common.sh@973 -- # kill 2065588 00:04:07.618 11:15:21 rpc -- common/autotest_common.sh@978 -- # wait 2065588 00:04:07.877 00:04:07.877 real 0m2.587s 00:04:07.877 user 0m3.325s 00:04:07.877 sys 0m0.697s 00:04:07.877 11:15:21 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.877 11:15:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.877 ************************************ 00:04:07.877 END TEST rpc 00:04:07.877 ************************************ 00:04:07.877 11:15:21 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:07.877 11:15:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.877 11:15:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.877 11:15:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.877 ************************************ 00:04:07.877 START TEST skip_rpc 00:04:07.877 ************************************ 00:04:07.877 11:15:21 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:07.877 * Looking for test storage... 
00:04:07.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:07.877 11:15:21 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:07.877 11:15:21 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:07.877 11:15:21 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:08.137 11:15:21 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.137 11:15:21 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:08.137 11:15:21 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.137 11:15:21 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:08.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.137 --rc genhtml_branch_coverage=1 00:04:08.137 --rc genhtml_function_coverage=1 00:04:08.137 --rc genhtml_legend=1 00:04:08.137 --rc geninfo_all_blocks=1 00:04:08.137 --rc geninfo_unexecuted_blocks=1 00:04:08.137 00:04:08.137 ' 00:04:08.137 11:15:21 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:08.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.137 --rc genhtml_branch_coverage=1 00:04:08.137 --rc genhtml_function_coverage=1 00:04:08.137 --rc genhtml_legend=1 00:04:08.137 --rc geninfo_all_blocks=1 00:04:08.137 --rc geninfo_unexecuted_blocks=1 00:04:08.137 00:04:08.137 ' 00:04:08.137 11:15:21 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:08.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.137 --rc genhtml_branch_coverage=1 00:04:08.137 --rc genhtml_function_coverage=1 00:04:08.137 --rc genhtml_legend=1 00:04:08.137 --rc geninfo_all_blocks=1 00:04:08.137 --rc geninfo_unexecuted_blocks=1 00:04:08.137 00:04:08.137 ' 00:04:08.137 11:15:21 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:08.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.137 --rc genhtml_branch_coverage=1 00:04:08.137 --rc genhtml_function_coverage=1 00:04:08.137 --rc genhtml_legend=1 00:04:08.137 --rc geninfo_all_blocks=1 00:04:08.137 --rc geninfo_unexecuted_blocks=1 00:04:08.137 00:04:08.137 ' 00:04:08.137 11:15:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.137 11:15:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:08.137 11:15:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:08.137 11:15:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.137 11:15:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.137 11:15:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.137 ************************************ 00:04:08.137 START TEST skip_rpc 00:04:08.137 ************************************ 00:04:08.137 11:15:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:08.137 11:15:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2066240 00:04:08.137 11:15:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.137 11:15:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:08.137 11:15:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:08.137 [2024-11-19 11:15:21.796434] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:08.137 [2024-11-19 11:15:21.796471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066240 ] 00:04:08.137 [2024-11-19 11:15:21.867534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.137 [2024-11-19 11:15:21.907602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.500 11:15:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:13.500 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:13.500 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:13.500 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:13.501 11:15:26 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2066240 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2066240 ']' 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2066240 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2066240 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2066240' 00:04:13.501 killing process with pid 2066240 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2066240 00:04:13.501 11:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2066240 00:04:13.501 00:04:13.501 real 0m5.356s 00:04:13.501 user 0m5.127s 00:04:13.501 sys 0m0.263s 00:04:13.501 11:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.501 11:15:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.501 ************************************ 00:04:13.501 END TEST skip_rpc 00:04:13.501 ************************************ 00:04:13.501 11:15:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:13.501 11:15:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.501 11:15:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.501 11:15:27 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.501 ************************************ 00:04:13.501 START TEST skip_rpc_with_json 00:04:13.501 ************************************ 00:04:13.501 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:13.501 11:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:13.501 11:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2067186 00:04:13.501 11:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.501 11:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:13.501 11:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2067186 00:04:13.501 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2067186 ']' 00:04:13.501 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.501 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.501 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.501 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.501 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.501 [2024-11-19 11:15:27.222537] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:13.501 [2024-11-19 11:15:27.222579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2067186 ] 00:04:13.760 [2024-11-19 11:15:27.296638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.760 [2024-11-19 11:15:27.339149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.020 [2024-11-19 11:15:27.557827] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:14.020 request: 00:04:14.020 { 00:04:14.020 "trtype": "tcp", 00:04:14.020 "method": "nvmf_get_transports", 00:04:14.020 "req_id": 1 00:04:14.020 } 00:04:14.020 Got JSON-RPC error response 00:04:14.020 response: 00:04:14.020 { 00:04:14.020 "code": -19, 00:04:14.020 "message": "No such device" 00:04:14.020 } 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.020 [2024-11-19 11:15:27.569929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:14.020 11:15:27 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.020 11:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.020 { 00:04:14.020 "subsystems": [ 00:04:14.020 { 00:04:14.020 "subsystem": "fsdev", 00:04:14.020 "config": [ 00:04:14.020 { 00:04:14.020 "method": "fsdev_set_opts", 00:04:14.020 "params": { 00:04:14.020 "fsdev_io_pool_size": 65535, 00:04:14.020 "fsdev_io_cache_size": 256 00:04:14.020 } 00:04:14.020 } 00:04:14.020 ] 00:04:14.020 }, 00:04:14.020 { 00:04:14.020 "subsystem": "vfio_user_target", 00:04:14.020 "config": null 00:04:14.020 }, 00:04:14.020 { 00:04:14.020 "subsystem": "keyring", 00:04:14.020 "config": [] 00:04:14.020 }, 00:04:14.020 { 00:04:14.020 "subsystem": "iobuf", 00:04:14.020 "config": [ 00:04:14.020 { 00:04:14.020 "method": "iobuf_set_options", 00:04:14.020 "params": { 00:04:14.020 "small_pool_count": 8192, 00:04:14.020 "large_pool_count": 1024, 00:04:14.020 "small_bufsize": 8192, 00:04:14.020 "large_bufsize": 135168, 00:04:14.020 "enable_numa": false 00:04:14.020 } 00:04:14.020 } 00:04:14.020 ] 00:04:14.020 }, 00:04:14.020 { 00:04:14.020 "subsystem": "sock", 00:04:14.020 "config": [ 00:04:14.020 { 00:04:14.020 "method": "sock_set_default_impl", 00:04:14.020 "params": { 00:04:14.020 "impl_name": "posix" 00:04:14.020 } 00:04:14.020 }, 00:04:14.020 { 00:04:14.020 "method": "sock_impl_set_options", 00:04:14.020 "params": { 00:04:14.020 "impl_name": "ssl", 00:04:14.020 "recv_buf_size": 4096, 00:04:14.020 "send_buf_size": 4096, 
00:04:14.020 "enable_recv_pipe": true, 00:04:14.020 "enable_quickack": false, 00:04:14.020 "enable_placement_id": 0, 00:04:14.020 "enable_zerocopy_send_server": true, 00:04:14.020 "enable_zerocopy_send_client": false, 00:04:14.020 "zerocopy_threshold": 0, 00:04:14.020 "tls_version": 0, 00:04:14.020 "enable_ktls": false 00:04:14.020 } 00:04:14.020 }, 00:04:14.020 { 00:04:14.020 "method": "sock_impl_set_options", 00:04:14.020 "params": { 00:04:14.020 "impl_name": "posix", 00:04:14.020 "recv_buf_size": 2097152, 00:04:14.020 "send_buf_size": 2097152, 00:04:14.020 "enable_recv_pipe": true, 00:04:14.020 "enable_quickack": false, 00:04:14.020 "enable_placement_id": 0, 00:04:14.020 "enable_zerocopy_send_server": true, 00:04:14.020 "enable_zerocopy_send_client": false, 00:04:14.020 "zerocopy_threshold": 0, 00:04:14.020 "tls_version": 0, 00:04:14.020 "enable_ktls": false 00:04:14.020 } 00:04:14.020 } 00:04:14.020 ] 00:04:14.020 }, 00:04:14.020 { 00:04:14.020 "subsystem": "vmd", 00:04:14.020 "config": [] 00:04:14.020 }, 00:04:14.020 { 00:04:14.020 "subsystem": "accel", 00:04:14.020 "config": [ 00:04:14.020 { 00:04:14.020 "method": "accel_set_options", 00:04:14.020 "params": { 00:04:14.020 "small_cache_size": 128, 00:04:14.020 "large_cache_size": 16, 00:04:14.020 "task_count": 2048, 00:04:14.020 "sequence_count": 2048, 00:04:14.020 "buf_count": 2048 00:04:14.020 } 00:04:14.020 } 00:04:14.020 ] 00:04:14.020 }, 00:04:14.020 { 00:04:14.020 "subsystem": "bdev", 00:04:14.020 "config": [ 00:04:14.020 { 00:04:14.020 "method": "bdev_set_options", 00:04:14.020 "params": { 00:04:14.020 "bdev_io_pool_size": 65535, 00:04:14.020 "bdev_io_cache_size": 256, 00:04:14.020 "bdev_auto_examine": true, 00:04:14.020 "iobuf_small_cache_size": 128, 00:04:14.020 "iobuf_large_cache_size": 16 00:04:14.020 } 00:04:14.020 }, 00:04:14.020 { 00:04:14.020 "method": "bdev_raid_set_options", 00:04:14.020 "params": { 00:04:14.020 "process_window_size_kb": 1024, 00:04:14.020 "process_max_bandwidth_mb_sec": 0 
00:04:14.020 } 00:04:14.020 }, 00:04:14.020 { 00:04:14.020 "method": "bdev_iscsi_set_options", 00:04:14.020 "params": { 00:04:14.020 "timeout_sec": 30 00:04:14.020 } 00:04:14.020 }, 00:04:14.020 { 00:04:14.020 "method": "bdev_nvme_set_options", 00:04:14.020 "params": { 00:04:14.020 "action_on_timeout": "none", 00:04:14.020 "timeout_us": 0, 00:04:14.020 "timeout_admin_us": 0, 00:04:14.020 "keep_alive_timeout_ms": 10000, 00:04:14.020 "arbitration_burst": 0, 00:04:14.020 "low_priority_weight": 0, 00:04:14.020 "medium_priority_weight": 0, 00:04:14.020 "high_priority_weight": 0, 00:04:14.020 "nvme_adminq_poll_period_us": 10000, 00:04:14.020 "nvme_ioq_poll_period_us": 0, 00:04:14.020 "io_queue_requests": 0, 00:04:14.020 "delay_cmd_submit": true, 00:04:14.020 "transport_retry_count": 4, 00:04:14.020 "bdev_retry_count": 3, 00:04:14.021 "transport_ack_timeout": 0, 00:04:14.021 "ctrlr_loss_timeout_sec": 0, 00:04:14.021 "reconnect_delay_sec": 0, 00:04:14.021 "fast_io_fail_timeout_sec": 0, 00:04:14.021 "disable_auto_failback": false, 00:04:14.021 "generate_uuids": false, 00:04:14.021 "transport_tos": 0, 00:04:14.021 "nvme_error_stat": false, 00:04:14.021 "rdma_srq_size": 0, 00:04:14.021 "io_path_stat": false, 00:04:14.021 "allow_accel_sequence": false, 00:04:14.021 "rdma_max_cq_size": 0, 00:04:14.021 "rdma_cm_event_timeout_ms": 0, 00:04:14.021 "dhchap_digests": [ 00:04:14.021 "sha256", 00:04:14.021 "sha384", 00:04:14.021 "sha512" 00:04:14.021 ], 00:04:14.021 "dhchap_dhgroups": [ 00:04:14.021 "null", 00:04:14.021 "ffdhe2048", 00:04:14.021 "ffdhe3072", 00:04:14.021 "ffdhe4096", 00:04:14.021 "ffdhe6144", 00:04:14.021 "ffdhe8192" 00:04:14.021 ] 00:04:14.021 } 00:04:14.021 }, 00:04:14.021 { 00:04:14.021 "method": "bdev_nvme_set_hotplug", 00:04:14.021 "params": { 00:04:14.021 "period_us": 100000, 00:04:14.021 "enable": false 00:04:14.021 } 00:04:14.021 }, 00:04:14.021 { 00:04:14.021 "method": "bdev_wait_for_examine" 00:04:14.021 } 00:04:14.021 ] 00:04:14.021 }, 00:04:14.021 { 
00:04:14.021 "subsystem": "scsi", 00:04:14.021 "config": null 00:04:14.021 }, 00:04:14.021 { 00:04:14.021 "subsystem": "scheduler", 00:04:14.021 "config": [ 00:04:14.021 { 00:04:14.021 "method": "framework_set_scheduler", 00:04:14.021 "params": { 00:04:14.021 "name": "static" 00:04:14.021 } 00:04:14.021 } 00:04:14.021 ] 00:04:14.021 }, 00:04:14.021 { 00:04:14.021 "subsystem": "vhost_scsi", 00:04:14.021 "config": [] 00:04:14.021 }, 00:04:14.021 { 00:04:14.021 "subsystem": "vhost_blk", 00:04:14.021 "config": [] 00:04:14.021 }, 00:04:14.021 { 00:04:14.021 "subsystem": "ublk", 00:04:14.021 "config": [] 00:04:14.021 }, 00:04:14.021 { 00:04:14.021 "subsystem": "nbd", 00:04:14.021 "config": [] 00:04:14.021 }, 00:04:14.021 { 00:04:14.021 "subsystem": "nvmf", 00:04:14.021 "config": [ 00:04:14.021 { 00:04:14.021 "method": "nvmf_set_config", 00:04:14.021 "params": { 00:04:14.021 "discovery_filter": "match_any", 00:04:14.021 "admin_cmd_passthru": { 00:04:14.021 "identify_ctrlr": false 00:04:14.021 }, 00:04:14.021 "dhchap_digests": [ 00:04:14.021 "sha256", 00:04:14.021 "sha384", 00:04:14.021 "sha512" 00:04:14.021 ], 00:04:14.021 "dhchap_dhgroups": [ 00:04:14.021 "null", 00:04:14.021 "ffdhe2048", 00:04:14.021 "ffdhe3072", 00:04:14.021 "ffdhe4096", 00:04:14.021 "ffdhe6144", 00:04:14.021 "ffdhe8192" 00:04:14.021 ] 00:04:14.021 } 00:04:14.021 }, 00:04:14.021 { 00:04:14.021 "method": "nvmf_set_max_subsystems", 00:04:14.021 "params": { 00:04:14.021 "max_subsystems": 1024 00:04:14.021 } 00:04:14.021 }, 00:04:14.021 { 00:04:14.021 "method": "nvmf_set_crdt", 00:04:14.021 "params": { 00:04:14.021 "crdt1": 0, 00:04:14.021 "crdt2": 0, 00:04:14.021 "crdt3": 0 00:04:14.021 } 00:04:14.021 }, 00:04:14.021 { 00:04:14.021 "method": "nvmf_create_transport", 00:04:14.021 "params": { 00:04:14.021 "trtype": "TCP", 00:04:14.021 "max_queue_depth": 128, 00:04:14.021 "max_io_qpairs_per_ctrlr": 127, 00:04:14.021 "in_capsule_data_size": 4096, 00:04:14.021 "max_io_size": 131072, 00:04:14.021 
"io_unit_size": 131072, 00:04:14.021 "max_aq_depth": 128, 00:04:14.021 "num_shared_buffers": 511, 00:04:14.021 "buf_cache_size": 4294967295, 00:04:14.021 "dif_insert_or_strip": false, 00:04:14.021 "zcopy": false, 00:04:14.021 "c2h_success": true, 00:04:14.021 "sock_priority": 0, 00:04:14.021 "abort_timeout_sec": 1, 00:04:14.021 "ack_timeout": 0, 00:04:14.021 "data_wr_pool_size": 0 00:04:14.021 } 00:04:14.021 } 00:04:14.021 ] 00:04:14.021 }, 00:04:14.021 { 00:04:14.021 "subsystem": "iscsi", 00:04:14.021 "config": [ 00:04:14.021 { 00:04:14.021 "method": "iscsi_set_options", 00:04:14.021 "params": { 00:04:14.021 "node_base": "iqn.2016-06.io.spdk", 00:04:14.021 "max_sessions": 128, 00:04:14.021 "max_connections_per_session": 2, 00:04:14.021 "max_queue_depth": 64, 00:04:14.021 "default_time2wait": 2, 00:04:14.021 "default_time2retain": 20, 00:04:14.021 "first_burst_length": 8192, 00:04:14.021 "immediate_data": true, 00:04:14.021 "allow_duplicated_isid": false, 00:04:14.021 "error_recovery_level": 0, 00:04:14.021 "nop_timeout": 60, 00:04:14.021 "nop_in_interval": 30, 00:04:14.021 "disable_chap": false, 00:04:14.021 "require_chap": false, 00:04:14.021 "mutual_chap": false, 00:04:14.021 "chap_group": 0, 00:04:14.021 "max_large_datain_per_connection": 64, 00:04:14.021 "max_r2t_per_connection": 4, 00:04:14.021 "pdu_pool_size": 36864, 00:04:14.021 "immediate_data_pool_size": 16384, 00:04:14.021 "data_out_pool_size": 2048 00:04:14.021 } 00:04:14.021 } 00:04:14.021 ] 00:04:14.021 } 00:04:14.021 ] 00:04:14.021 } 00:04:14.021 11:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:14.021 11:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2067186 00:04:14.021 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2067186 ']' 00:04:14.021 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2067186 00:04:14.021 11:15:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:04:14.021 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.021 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2067186 00:04:14.021 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.021 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.021 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2067186' 00:04:14.021 killing process with pid 2067186 00:04:14.021 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2067186 00:04:14.021 11:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2067186 00:04:14.590 11:15:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2067419 00:04:14.590 11:15:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.590 11:15:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2067419 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2067419 ']' 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2067419 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2067419 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2067419' 00:04:19.865 killing process with pid 2067419 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2067419 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2067419 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:19.865 00:04:19.865 real 0m6.284s 00:04:19.865 user 0m5.983s 00:04:19.865 sys 0m0.592s 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.865 ************************************ 00:04:19.865 END TEST skip_rpc_with_json 00:04:19.865 ************************************ 00:04:19.865 11:15:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:19.865 11:15:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.865 11:15:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.865 11:15:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.865 ************************************ 00:04:19.865 START TEST skip_rpc_with_delay 00:04:19.865 ************************************ 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.865 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.866 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:19.866 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:19.866 [2024-11-19 11:15:33.577738] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:19.866 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:19.866 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:19.866 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:19.866 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:19.866 00:04:19.866 real 0m0.067s 00:04:19.866 user 0m0.046s 00:04:19.866 sys 0m0.020s 00:04:19.866 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.866 11:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:19.866 ************************************ 00:04:19.866 END TEST skip_rpc_with_delay 00:04:19.866 ************************************ 00:04:19.866 11:15:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:19.866 11:15:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:19.866 11:15:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:19.866 11:15:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.866 11:15:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.866 11:15:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.126 ************************************ 00:04:20.126 START TEST exit_on_failed_rpc_init 00:04:20.126 ************************************ 00:04:20.126 11:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:20.126 11:15:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2068392 00:04:20.126 11:15:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2068392 00:04:20.126 11:15:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:20.126 11:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2068392 ']' 00:04:20.126 11:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.126 11:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.126 11:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.126 11:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.126 11:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.126 [2024-11-19 11:15:33.719468] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:20.126 [2024-11-19 11:15:33.719512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2068392 ] 00:04:20.126 [2024-11-19 11:15:33.795827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.126 [2024-11-19 11:15:33.838115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:20.385 
11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:20.385 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:20.385 [2024-11-19 11:15:34.103657] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:20.385 [2024-11-19 11:15:34.103704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2068406 ] 00:04:20.645 [2024-11-19 11:15:34.179429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.645 [2024-11-19 11:15:34.220386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.645 [2024-11-19 11:15:34.220439] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:20.645 [2024-11-19 11:15:34.220448] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:20.645 [2024-11-19 11:15:34.220457] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2068392 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2068392 ']' 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2068392 00:04:20.645 11:15:34 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2068392 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2068392' 00:04:20.645 killing process with pid 2068392 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2068392 00:04:20.645 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2068392 00:04:20.904 00:04:20.904 real 0m0.953s 00:04:20.904 user 0m1.013s 00:04:20.904 sys 0m0.394s 00:04:20.904 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.904 11:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.904 ************************************ 00:04:20.904 END TEST exit_on_failed_rpc_init 00:04:20.904 ************************************ 00:04:20.904 11:15:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:20.904 00:04:20.904 real 0m13.120s 00:04:20.904 user 0m12.382s 00:04:20.904 sys 0m1.550s 00:04:20.904 11:15:34 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.904 11:15:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.904 ************************************ 00:04:20.904 END TEST skip_rpc 00:04:20.904 ************************************ 00:04:21.163 11:15:34 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:21.163 11:15:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.163 11:15:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.163 11:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:21.163 ************************************ 00:04:21.163 START TEST rpc_client 00:04:21.163 ************************************ 00:04:21.163 11:15:34 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:21.163 * Looking for test storage... 00:04:21.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:21.163 11:15:34 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.163 11:15:34 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.163 11:15:34 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.163 11:15:34 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.163 11:15:34 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:21.163 11:15:34 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.163 11:15:34 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.163 --rc genhtml_branch_coverage=1 00:04:21.163 --rc genhtml_function_coverage=1 00:04:21.163 --rc genhtml_legend=1 00:04:21.163 --rc geninfo_all_blocks=1 00:04:21.163 --rc geninfo_unexecuted_blocks=1 00:04:21.163 00:04:21.163 ' 00:04:21.163 11:15:34 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.163 --rc genhtml_branch_coverage=1 
00:04:21.163 --rc genhtml_function_coverage=1 00:04:21.163 --rc genhtml_legend=1 00:04:21.163 --rc geninfo_all_blocks=1 00:04:21.163 --rc geninfo_unexecuted_blocks=1 00:04:21.163 00:04:21.163 ' 00:04:21.163 11:15:34 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.163 --rc genhtml_branch_coverage=1 00:04:21.163 --rc genhtml_function_coverage=1 00:04:21.163 --rc genhtml_legend=1 00:04:21.163 --rc geninfo_all_blocks=1 00:04:21.163 --rc geninfo_unexecuted_blocks=1 00:04:21.163 00:04:21.163 ' 00:04:21.164 11:15:34 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.164 --rc genhtml_branch_coverage=1 00:04:21.164 --rc genhtml_function_coverage=1 00:04:21.164 --rc genhtml_legend=1 00:04:21.164 --rc geninfo_all_blocks=1 00:04:21.164 --rc geninfo_unexecuted_blocks=1 00:04:21.164 00:04:21.164 ' 00:04:21.164 11:15:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:21.164 OK 00:04:21.164 11:15:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:21.164 00:04:21.164 real 0m0.202s 00:04:21.164 user 0m0.131s 00:04:21.164 sys 0m0.086s 00:04:21.164 11:15:34 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.164 11:15:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:21.164 ************************************ 00:04:21.164 END TEST rpc_client 00:04:21.164 ************************************ 00:04:21.424 11:15:34 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:21.424 11:15:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.424 11:15:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.424 11:15:34 -- common/autotest_common.sh@10 
-- # set +x 00:04:21.424 ************************************ 00:04:21.424 START TEST json_config 00:04:21.424 ************************************ 00:04:21.424 11:15:34 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:21.424 11:15:35 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.424 11:15:35 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.424 11:15:35 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.424 11:15:35 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.424 11:15:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.424 11:15:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.424 11:15:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.424 11:15:35 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.424 11:15:35 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.424 11:15:35 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.424 11:15:35 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.424 11:15:35 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.424 11:15:35 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.424 11:15:35 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.424 11:15:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.424 11:15:35 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:21.424 11:15:35 json_config -- scripts/common.sh@345 -- # : 1 00:04:21.424 11:15:35 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.424 11:15:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.424 11:15:35 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:21.424 11:15:35 json_config -- scripts/common.sh@353 -- # local d=1 00:04:21.424 11:15:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.424 11:15:35 json_config -- scripts/common.sh@355 -- # echo 1 00:04:21.424 11:15:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.424 11:15:35 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:21.424 11:15:35 json_config -- scripts/common.sh@353 -- # local d=2 00:04:21.424 11:15:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.424 11:15:35 json_config -- scripts/common.sh@355 -- # echo 2 00:04:21.424 11:15:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.424 11:15:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.424 11:15:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.424 11:15:35 json_config -- scripts/common.sh@368 -- # return 0 00:04:21.424 11:15:35 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.424 11:15:35 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.424 --rc genhtml_branch_coverage=1 00:04:21.424 --rc genhtml_function_coverage=1 00:04:21.424 --rc genhtml_legend=1 00:04:21.424 --rc geninfo_all_blocks=1 00:04:21.424 --rc geninfo_unexecuted_blocks=1 00:04:21.424 00:04:21.424 ' 00:04:21.424 11:15:35 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.424 --rc genhtml_branch_coverage=1 00:04:21.424 --rc genhtml_function_coverage=1 00:04:21.424 --rc genhtml_legend=1 00:04:21.424 --rc geninfo_all_blocks=1 00:04:21.424 --rc geninfo_unexecuted_blocks=1 00:04:21.424 00:04:21.424 ' 00:04:21.424 11:15:35 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.424 --rc genhtml_branch_coverage=1 00:04:21.424 --rc genhtml_function_coverage=1 00:04:21.424 --rc genhtml_legend=1 00:04:21.424 --rc geninfo_all_blocks=1 00:04:21.424 --rc geninfo_unexecuted_blocks=1 00:04:21.424 00:04:21.424 ' 00:04:21.424 11:15:35 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.424 --rc genhtml_branch_coverage=1 00:04:21.424 --rc genhtml_function_coverage=1 00:04:21.424 --rc genhtml_legend=1 00:04:21.424 --rc geninfo_all_blocks=1 00:04:21.424 --rc geninfo_unexecuted_blocks=1 00:04:21.424 00:04:21.424 ' 00:04:21.424 11:15:35 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:21.424 11:15:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:21.424 11:15:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.424 11:15:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:21.425 11:15:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:21.425 11:15:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.425 11:15:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.425 11:15:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.425 11:15:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.425 11:15:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.425 11:15:35 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.425 11:15:35 json_config -- paths/export.sh@5 -- # export PATH 00:04:21.425 11:15:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@51 -- # : 0 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:21.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:21.425 11:15:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:21.425 INFO: JSON configuration test init 00:04:21.425 11:15:35 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:21.425 11:15:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.425 11:15:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:21.425 11:15:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.425 11:15:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.425 11:15:35 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:21.425 11:15:35 json_config -- json_config/common.sh@9 -- # local app=target 00:04:21.425 11:15:35 json_config -- json_config/common.sh@10 -- # shift 00:04:21.425 11:15:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:21.425 11:15:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:21.425 11:15:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:21.425 11:15:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.425 11:15:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.425 11:15:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2068758 00:04:21.425 11:15:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:21.425 Waiting for target to run... 
00:04:21.425 11:15:35 json_config -- json_config/common.sh@25 -- # waitforlisten 2068758 /var/tmp/spdk_tgt.sock 00:04:21.425 11:15:35 json_config -- common/autotest_common.sh@835 -- # '[' -z 2068758 ']' 00:04:21.425 11:15:35 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.425 11:15:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:21.425 11:15:35 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.425 11:15:35 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:21.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:21.425 11:15:35 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.425 11:15:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.684 [2024-11-19 11:15:35.242241] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:21.684 [2024-11-19 11:15:35.242290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2068758 ] 00:04:21.944 [2024-11-19 11:15:35.691897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.203 [2024-11-19 11:15:35.749219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.462 11:15:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.462 11:15:36 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:22.462 11:15:36 json_config -- json_config/common.sh@26 -- # echo '' 00:04:22.462 00:04:22.462 11:15:36 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:22.462 11:15:36 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:22.462 11:15:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.462 11:15:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.462 11:15:36 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:22.462 11:15:36 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:22.462 11:15:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.462 11:15:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.462 11:15:36 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:22.462 11:15:36 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:22.462 11:15:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:25.751 11:15:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.751 11:15:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:25.751 11:15:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@54 -- # sort 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:25.751 11:15:39 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:25.751 11:15:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.751 11:15:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:25.751 11:15:39 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:25.752 11:15:39 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:25.752 11:15:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.752 11:15:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.752 11:15:39 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:25.752 11:15:39 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:25.752 11:15:39 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:25.752 11:15:39 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:25.752 11:15:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.010 MallocForNvmf0 00:04:26.010 11:15:39 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:26.010 11:15:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.269 MallocForNvmf1 00:04:26.269 11:15:39 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.269 11:15:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.269 [2024-11-19 11:15:40.007006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.269 11:15:40 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:26.269 11:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:26.528 11:15:40 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:26.528 11:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:26.787 11:15:40 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:26.787 11:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.045 11:15:40 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.045 11:15:40 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.045 [2024-11-19 11:15:40.785451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:27.045 11:15:40 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:27.046 11:15:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.046 11:15:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.304 11:15:40 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:27.304 11:15:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.304 11:15:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.304 11:15:40 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:27.304 11:15:40 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:27.304 11:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:27.304 MallocBdevForConfigChangeCheck 00:04:27.304 11:15:41 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:27.304 11:15:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.304 11:15:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.564 11:15:41 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:27.564 11:15:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:27.823 11:15:41 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:27.823 INFO: shutting down applications... 00:04:27.823 11:15:41 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:27.823 11:15:41 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:27.823 11:15:41 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:27.823 11:15:41 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:29.727 Calling clear_iscsi_subsystem 00:04:29.727 Calling clear_nvmf_subsystem 00:04:29.727 Calling clear_nbd_subsystem 00:04:29.727 Calling clear_ublk_subsystem 00:04:29.727 Calling clear_vhost_blk_subsystem 00:04:29.727 Calling clear_vhost_scsi_subsystem 00:04:29.727 Calling clear_bdev_subsystem 00:04:29.727 11:15:43 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:29.727 11:15:43 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:29.727 11:15:43 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:29.727 11:15:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.727 11:15:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:29.727 11:15:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:29.727 11:15:43 json_config -- json_config/json_config.sh@352 -- # break 00:04:29.727 11:15:43 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:29.727 11:15:43 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:29.727 11:15:43 json_config -- json_config/common.sh@31 -- # local app=target 00:04:29.727 11:15:43 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.727 11:15:43 json_config -- json_config/common.sh@35 -- # [[ -n 2068758 ]] 00:04:29.727 11:15:43 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2068758 00:04:29.727 11:15:43 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.727 11:15:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.727 11:15:43 json_config -- json_config/common.sh@41 -- # kill -0 2068758 00:04:29.727 11:15:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.298 11:15:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.298 11:15:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.298 11:15:43 json_config -- json_config/common.sh@41 -- # kill -0 2068758 00:04:30.298 11:15:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.298 11:15:43 json_config -- json_config/common.sh@43 -- # break 00:04:30.298 11:15:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.298 11:15:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.298 SPDK target shutdown done 00:04:30.298 11:15:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:30.298 INFO: relaunching applications... 
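The shutdown sequence traced here (json_config/common.sh) is a generic signal-then-poll loop: send SIGINT to the target PID, then probe it with `kill -0` for up to 30 half-second intervals before giving up. A standalone sketch of that pattern follows; the function name is illustrative, and the demo signals with SIGTERM rather than SIGINT only because non-interactive shells start background jobs with SIGINT ignored:

```shell
#!/usr/bin/env bash
# Sketch of the shutdown loop seen in json_config/common.sh:
# signal the app, then poll with `kill -0` (an existence check only).
shutdown_app() {
    local pid=$1 sig=${2:-SIGINT}               # common.sh sends SIGINT
    kill -s "$sig" "$pid" 2>/dev/null
    local i
    for (( i = 0; i < 30; i++ )); do            # same 30-iteration bound as the trace
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5                               # same 0.5s poll interval
    done
    kill -9 "$pid" 2>/dev/null                  # last resort after ~15s
    return 1
}

# Demo: background jobs inherit SIGINT as ignored in scripts,
# so the demo uses SIGTERM to make `sleep` actually exit.
sleep 60 &
shutdown_app $! SIGTERM
```

The `kill -0` probe delivers no signal at all; it only asks the kernel whether the PID still exists, which is why the loop can distinguish "still shutting down" from "gone".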
00:04:30.298 11:15:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.298 11:15:43 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.298 11:15:43 json_config -- json_config/common.sh@10 -- # shift 00:04:30.298 11:15:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.298 11:15:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.298 11:15:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.298 11:15:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.298 11:15:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.298 11:15:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2070279 00:04:30.298 11:15:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.298 Waiting for target to run... 00:04:30.298 11:15:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.298 11:15:43 json_config -- json_config/common.sh@25 -- # waitforlisten 2070279 /var/tmp/spdk_tgt.sock 00:04:30.298 11:15:43 json_config -- common/autotest_common.sh@835 -- # '[' -z 2070279 ']' 00:04:30.298 11:15:43 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.298 11:15:43 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.298 11:15:43 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:30.298 11:15:43 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.298 11:15:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.298 [2024-11-19 11:15:43.975473] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:30.298 [2024-11-19 11:15:43.975528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2070279 ] 00:04:30.867 [2024-11-19 11:15:44.434042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.867 [2024-11-19 11:15:44.487861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.155 [2024-11-19 11:15:47.516952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.155 [2024-11-19 11:15:47.549313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.414 11:15:48 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.414 11:15:48 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:34.414 11:15:48 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.414 00:04:34.415 11:15:48 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:34.673 11:15:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:34.673 INFO: Checking if target configuration is the same... 
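Before any config check runs, the trace launches spdk_tgt in the background and blocks in `waitforlisten` until the RPC socket at /var/tmp/spdk_tgt.sock is up. The same launch-and-poll idea can be sketched with a plain Unix socket; the python3 stand-in process and the /tmp path below are invented for the sketch and are not the real spdk_tgt:

```shell
#!/usr/bin/env bash
# Launch-in-background + poll-for-socket, the idea behind waitforlisten.
SOCK=/tmp/demo_tgt.sock
rm -f "$SOCK"

# Stand-in "target": binds a Unix socket and stays up briefly.
python3 - "$SOCK" <<'EOF' &
import socket, sys, time
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind(sys.argv[1])
s.listen(1)
time.sleep(5)
EOF
app_pid=$!

waitforlisten() {
    local sock=$1 max_retries=${2:-100}   # the trace shows max_retries=100
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$sock" ] && return 0        # socket node present: app is listening
        sleep 0.1
    done
    return 1
}

waitforlisten "$SOCK" && echo "listening on $SOCK"
kill "$app_pid" 2>/dev/null
```

Polling for the socket rather than sleeping a fixed interval is what lets the harness print "Waiting for target to run..." once and proceed the moment the target is actually ready.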
00:04:34.673 11:15:48 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.673 11:15:48 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:34.673 11:15:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.673 + '[' 2 -ne 2 ']' 00:04:34.673 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:34.673 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:34.673 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:34.673 +++ basename /dev/fd/62 00:04:34.673 ++ mktemp /tmp/62.XXX 00:04:34.673 + tmp_file_1=/tmp/62.9ll 00:04:34.673 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.673 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.673 + tmp_file_2=/tmp/spdk_tgt_config.json.PNr 00:04:34.673 + ret=0 00:04:34.673 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.932 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.932 + diff -u /tmp/62.9ll /tmp/spdk_tgt_config.json.PNr 00:04:34.932 + echo 'INFO: JSON config files are the same' 00:04:34.932 INFO: JSON config files are the same 00:04:34.932 + rm /tmp/62.9ll /tmp/spdk_tgt_config.json.PNr 00:04:34.932 + exit 0 00:04:34.932 11:15:48 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:34.932 11:15:48 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:34.932 INFO: changing configuration and checking if this can be detected... 
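As the trace shows, json_diff.sh never compares the raw files: it runs both inputs through `config_filter.py -method sort` into mktemp files and diffs the normalized copies, so key order and formatting cannot cause a false mismatch. The same normalize-then-diff idea in miniature, with `python3 -m json.tool --sort-keys` standing in for SPDK's config_filter.py:

```shell
#!/usr/bin/env bash
# Normalize-then-diff, the idea behind json_diff.sh in the trace.
# python3 -m json.tool stands in for config_filter.py -method sort.
json_same() {
    local a=$1 b=$2 t1 t2
    t1=$(mktemp /tmp/62.XXX)                  # same mktemp pattern as the trace
    t2=$(mktemp /tmp/cfg.XXX)
    python3 -m json.tool --sort-keys "$a" > "$t1"
    python3 -m json.tool --sort-keys "$b" > "$t2"
    if diff -u "$t1" "$t2"; then
        echo 'INFO: JSON config files are the same'
        rm -f "$t1" "$t2"
        return 0
    fi
    rm -f "$t1" "$t2"
    return 1
}

# Demo: same content, different key order -> still "the same".
printf '{"b": 1, "a": 2}' > /tmp/cfg1.json
printf '{"a": 2, "b": 1}' > /tmp/cfg2.json
json_same /tmp/cfg1.json /tmp/cfg2.json
```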
00:04:34.932 11:15:48 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:34.932 11:15:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.192 11:15:48 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.192 11:15:48 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:35.192 11:15:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.192 + '[' 2 -ne 2 ']' 00:04:35.192 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:35.192 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:35.192 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:35.192 +++ basename /dev/fd/62 00:04:35.192 ++ mktemp /tmp/62.XXX 00:04:35.192 + tmp_file_1=/tmp/62.epQ 00:04:35.192 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.192 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.192 + tmp_file_2=/tmp/spdk_tgt_config.json.pER 00:04:35.192 + ret=0 00:04:35.192 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.451 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.451 + diff -u /tmp/62.epQ /tmp/spdk_tgt_config.json.pER 00:04:35.451 + ret=1 00:04:35.451 + echo '=== Start of file: /tmp/62.epQ ===' 00:04:35.451 + cat /tmp/62.epQ 00:04:35.451 + echo '=== End of file: /tmp/62.epQ ===' 00:04:35.451 + echo '' 00:04:35.451 + echo '=== Start of file: /tmp/spdk_tgt_config.json.pER ===' 00:04:35.451 + cat /tmp/spdk_tgt_config.json.pER 00:04:35.451 + echo '=== End of file: /tmp/spdk_tgt_config.json.pER ===' 00:04:35.451 + echo '' 00:04:35.451 + rm /tmp/62.epQ /tmp/spdk_tgt_config.json.pER 00:04:35.451 + exit 1 00:04:35.451 11:15:49 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:35.451 INFO: configuration change detected. 
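The change-detection step above works by construction: a throwaway marker bdev (MallocBdevForConfigChangeCheck) was created earlier, the test deletes it over the RPC socket, re-saves the config, and requires the sorted diff to now fail (ret=1). Stripped of SPDK, the mutate-then-recompare pattern looks like the sketch below; the file names and JSON shape are invented for illustration:

```shell
#!/usr/bin/env bash
# Mutate-then-recompare: why deleting the marker bdev must flip diff to ret=1.
snapshot() { python3 -m json.tool --sort-keys "$1"; }

printf '{"bdevs": ["Malloc0", "MallocBdevForConfigChangeCheck"]}' > /tmp/before.json
snapshot /tmp/before.json > /tmp/ref.txt      # baseline, like spdk_tgt_config.json

# Simulate bdev_malloc_delete by dropping the marker entry.
python3 - <<'EOF'
import json
cfg = json.load(open('/tmp/before.json'))
cfg['bdevs'].remove('MallocBdevForConfigChangeCheck')
json.dump(cfg, open('/tmp/after.json', 'w'))
EOF

# The re-snapshot no longer matches the baseline: change detected.
if ! diff -u /tmp/ref.txt <(snapshot /tmp/after.json); then
    echo 'INFO: configuration change detected.'
fi
```

Because the marker bdev exists only to be deleted, a passing ret=1 here proves the save/diff pipeline actually notices configuration changes, not just that two identical files compare equal.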
00:04:35.451 11:15:49 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:35.451 11:15:49 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:35.451 11:15:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.451 11:15:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.452 11:15:49 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:35.452 11:15:49 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:35.452 11:15:49 json_config -- json_config/json_config.sh@324 -- # [[ -n 2070279 ]] 00:04:35.452 11:15:49 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:35.452 11:15:49 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:35.452 11:15:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.452 11:15:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.452 11:15:49 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:35.452 11:15:49 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:35.452 11:15:49 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:35.452 11:15:49 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:35.452 11:15:49 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:35.452 11:15:49 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:35.452 11:15:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.452 11:15:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.710 11:15:49 json_config -- json_config/json_config.sh@330 -- # killprocess 2070279 00:04:35.710 11:15:49 json_config -- common/autotest_common.sh@954 -- # '[' -z 2070279 ']' 00:04:35.710 11:15:49 json_config -- common/autotest_common.sh@958 -- # kill -0 
2070279 00:04:35.710 11:15:49 json_config -- common/autotest_common.sh@959 -- # uname 00:04:35.710 11:15:49 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.710 11:15:49 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2070279 00:04:35.710 11:15:49 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.710 11:15:49 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.710 11:15:49 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2070279' 00:04:35.710 killing process with pid 2070279 00:04:35.710 11:15:49 json_config -- common/autotest_common.sh@973 -- # kill 2070279 00:04:35.710 11:15:49 json_config -- common/autotest_common.sh@978 -- # wait 2070279 00:04:37.086 11:15:50 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:37.086 11:15:50 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:37.086 11:15:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:37.086 11:15:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.086 11:15:50 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:37.086 11:15:50 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:37.086 INFO: Success 00:04:37.086 00:04:37.086 real 0m15.820s 00:04:37.086 user 0m16.234s 00:04:37.086 sys 0m2.779s 00:04:37.086 11:15:50 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.086 11:15:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.087 ************************************ 00:04:37.087 END TEST json_config 00:04:37.087 ************************************ 00:04:37.087 11:15:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:37.087 11:15:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.087 11:15:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.087 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:04:37.347 ************************************ 00:04:37.347 START TEST json_config_extra_key 00:04:37.347 ************************************ 00:04:37.347 11:15:50 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:37.347 11:15:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.347 11:15:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.347 11:15:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.347 11:15:51 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:37.347 11:15:51 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.347 11:15:51 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.347 --rc genhtml_branch_coverage=1 00:04:37.347 --rc genhtml_function_coverage=1 00:04:37.347 --rc genhtml_legend=1 00:04:37.347 --rc geninfo_all_blocks=1 
00:04:37.347 --rc geninfo_unexecuted_blocks=1 00:04:37.347 00:04:37.347 ' 00:04:37.347 11:15:51 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.347 --rc genhtml_branch_coverage=1 00:04:37.347 --rc genhtml_function_coverage=1 00:04:37.347 --rc genhtml_legend=1 00:04:37.347 --rc geninfo_all_blocks=1 00:04:37.347 --rc geninfo_unexecuted_blocks=1 00:04:37.347 00:04:37.347 ' 00:04:37.347 11:15:51 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.347 --rc genhtml_branch_coverage=1 00:04:37.347 --rc genhtml_function_coverage=1 00:04:37.347 --rc genhtml_legend=1 00:04:37.347 --rc geninfo_all_blocks=1 00:04:37.347 --rc geninfo_unexecuted_blocks=1 00:04:37.347 00:04:37.347 ' 00:04:37.347 11:15:51 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.347 --rc genhtml_branch_coverage=1 00:04:37.347 --rc genhtml_function_coverage=1 00:04:37.347 --rc genhtml_legend=1 00:04:37.347 --rc geninfo_all_blocks=1 00:04:37.347 --rc geninfo_unexecuted_blocks=1 00:04:37.347 00:04:37.347 ' 00:04:37.347 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.347 11:15:51 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.347 11:15:51 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.347 11:15:51 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.347 11:15:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.347 11:15:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.347 11:15:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:37.348 11:15:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.348 11:15:51 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:37.348 11:15:51 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:37.348 11:15:51 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:37.348 11:15:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:37.348 11:15:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.348 11:15:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.348 11:15:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:37.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:37.348 11:15:51 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:37.348 11:15:51 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:37.348 11:15:51 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:37.348 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:37.348 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:37.348 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:37.348 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:37.348 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:37.348 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:37.348 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:37.348 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:37.348 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:37.348 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:37.348 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:37.348 INFO: launching applications... 00:04:37.348 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:37.348 11:15:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:37.348 11:15:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:37.348 11:15:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:37.348 11:15:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:37.348 11:15:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:37.348 11:15:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.348 11:15:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.348 11:15:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2071554 00:04:37.348 11:15:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:37.348 Waiting for target to run... 
00:04:37.348 11:15:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2071554 /var/tmp/spdk_tgt.sock 00:04:37.348 11:15:51 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2071554 ']' 00:04:37.348 11:15:51 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:37.348 11:15:51 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.348 11:15:51 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.348 11:15:51 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:37.348 11:15:51 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.348 11:15:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:37.348 [2024-11-19 11:15:51.118399] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:37.348 [2024-11-19 11:15:51.118451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071554 ] 00:04:37.916 [2024-11-19 11:15:51.410137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.916 [2024-11-19 11:15:51.445034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.175 11:15:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.175 11:15:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:38.175 11:15:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:38.175 00:04:38.175 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:38.175 INFO: shutting down applications... 00:04:38.175 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:38.175 11:15:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:38.175 11:15:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:38.175 11:15:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2071554 ]] 00:04:38.175 11:15:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2071554 00:04:38.175 11:15:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:38.175 11:15:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.175 11:15:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2071554 00:04:38.175 11:15:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.743 11:15:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.743 11:15:52 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.743 11:15:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2071554 00:04:38.743 11:15:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:38.743 11:15:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:38.743 11:15:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:38.743 11:15:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:38.743 SPDK target shutdown done 00:04:38.743 11:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:38.743 Success 00:04:38.743 00:04:38.743 real 0m1.576s 00:04:38.743 user 0m1.358s 00:04:38.743 sys 0m0.402s 00:04:38.743 11:15:52 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.743 11:15:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:38.743 ************************************ 00:04:38.743 END TEST json_config_extra_key 00:04:38.743 ************************************ 00:04:38.743 11:15:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:38.743 11:15:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.743 11:15:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.743 11:15:52 -- common/autotest_common.sh@10 -- # set +x 00:04:39.002 ************************************ 00:04:39.002 START TEST alias_rpc 00:04:39.002 ************************************ 00:04:39.002 11:15:52 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:39.002 * Looking for test storage... 
00:04:39.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:39.002 11:15:52 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.002 11:15:52 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.002 11:15:52 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.002 11:15:52 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.002 11:15:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.002 11:15:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.002 11:15:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.003 11:15:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:39.003 11:15:52 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.003 11:15:52 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.003 --rc genhtml_branch_coverage=1 00:04:39.003 --rc genhtml_function_coverage=1 00:04:39.003 --rc genhtml_legend=1 00:04:39.003 --rc geninfo_all_blocks=1 00:04:39.003 --rc geninfo_unexecuted_blocks=1 00:04:39.003 00:04:39.003 ' 00:04:39.003 11:15:52 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.003 --rc genhtml_branch_coverage=1 00:04:39.003 --rc genhtml_function_coverage=1 00:04:39.003 --rc genhtml_legend=1 00:04:39.003 --rc geninfo_all_blocks=1 00:04:39.003 --rc geninfo_unexecuted_blocks=1 00:04:39.003 00:04:39.003 ' 00:04:39.003 11:15:52 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:39.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.003 --rc genhtml_branch_coverage=1 00:04:39.003 --rc genhtml_function_coverage=1 00:04:39.003 --rc genhtml_legend=1 00:04:39.003 --rc geninfo_all_blocks=1 00:04:39.003 --rc geninfo_unexecuted_blocks=1 00:04:39.003 00:04:39.003 ' 00:04:39.003 11:15:52 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.003 --rc genhtml_branch_coverage=1 00:04:39.003 --rc genhtml_function_coverage=1 00:04:39.003 --rc genhtml_legend=1 00:04:39.003 --rc geninfo_all_blocks=1 00:04:39.003 --rc geninfo_unexecuted_blocks=1 00:04:39.003 00:04:39.003 ' 00:04:39.003 11:15:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:39.003 11:15:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2071958 00:04:39.003 11:15:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.003 11:15:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2071958 00:04:39.003 11:15:52 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2071958 ']' 00:04:39.003 11:15:52 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.003 11:15:52 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.003 11:15:52 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.003 11:15:52 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.003 11:15:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.003 [2024-11-19 11:15:52.761116] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:39.003 [2024-11-19 11:15:52.761180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071958 ] 00:04:39.262 [2024-11-19 11:15:52.836213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.262 [2024-11-19 11:15:52.876806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.520 11:15:53 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.520 11:15:53 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:39.520 11:15:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:39.779 11:15:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2071958 00:04:39.779 11:15:53 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2071958 ']' 00:04:39.779 11:15:53 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2071958 00:04:39.779 11:15:53 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:39.779 11:15:53 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.779 11:15:53 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2071958 00:04:39.779 11:15:53 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.779 11:15:53 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.779 11:15:53 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2071958' 00:04:39.779 killing process with pid 2071958 00:04:39.779 11:15:53 alias_rpc -- common/autotest_common.sh@973 -- # kill 2071958 00:04:39.779 11:15:53 alias_rpc -- common/autotest_common.sh@978 -- # wait 2071958 00:04:40.038 00:04:40.038 real 0m1.152s 00:04:40.038 user 0m1.181s 00:04:40.038 sys 0m0.421s 00:04:40.038 11:15:53 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.038 11:15:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.038 ************************************ 00:04:40.038 END TEST alias_rpc 00:04:40.038 ************************************ 00:04:40.038 11:15:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:40.038 11:15:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:40.038 11:15:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.038 11:15:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.038 11:15:53 -- common/autotest_common.sh@10 -- # set +x 00:04:40.038 ************************************ 00:04:40.038 START TEST spdkcli_tcp 00:04:40.038 ************************************ 00:04:40.038 11:15:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:40.298 * Looking for test storage... 
00:04:40.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.298 11:15:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.298 --rc genhtml_branch_coverage=1 00:04:40.298 --rc genhtml_function_coverage=1 00:04:40.298 --rc genhtml_legend=1 00:04:40.298 --rc geninfo_all_blocks=1 00:04:40.298 --rc geninfo_unexecuted_blocks=1 00:04:40.298 00:04:40.298 ' 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.298 --rc genhtml_branch_coverage=1 00:04:40.298 --rc genhtml_function_coverage=1 00:04:40.298 --rc genhtml_legend=1 00:04:40.298 --rc geninfo_all_blocks=1 00:04:40.298 --rc geninfo_unexecuted_blocks=1 00:04:40.298 00:04:40.298 ' 00:04:40.298 11:15:53 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.298 --rc genhtml_branch_coverage=1 00:04:40.298 --rc genhtml_function_coverage=1 00:04:40.298 --rc genhtml_legend=1 00:04:40.298 --rc geninfo_all_blocks=1 00:04:40.298 --rc geninfo_unexecuted_blocks=1 00:04:40.298 00:04:40.298 ' 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.298 --rc genhtml_branch_coverage=1 00:04:40.298 --rc genhtml_function_coverage=1 00:04:40.298 --rc genhtml_legend=1 00:04:40.298 --rc geninfo_all_blocks=1 00:04:40.298 --rc geninfo_unexecuted_blocks=1 00:04:40.298 00:04:40.298 ' 00:04:40.298 11:15:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:40.298 11:15:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:40.298 11:15:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:40.298 11:15:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:40.298 11:15:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:40.298 11:15:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:40.298 11:15:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.298 11:15:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2072156 00:04:40.298 11:15:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2072156 00:04:40.298 11:15:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2072156 ']' 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.298 11:15:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.298 [2024-11-19 11:15:53.991036] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:40.298 [2024-11-19 11:15:53.991083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2072156 ] 00:04:40.298 [2024-11-19 11:15:54.067591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.557 [2024-11-19 11:15:54.112176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.557 [2024-11-19 11:15:54.112177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.557 11:15:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.557 11:15:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:40.557 11:15:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2072360 00:04:40.557 11:15:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:40.557 11:15:54 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:40.816 [ 00:04:40.816 "bdev_malloc_delete", 00:04:40.816 "bdev_malloc_create", 00:04:40.816 "bdev_null_resize", 00:04:40.816 "bdev_null_delete", 00:04:40.816 "bdev_null_create", 00:04:40.816 "bdev_nvme_cuse_unregister", 00:04:40.816 "bdev_nvme_cuse_register", 00:04:40.816 "bdev_opal_new_user", 00:04:40.816 "bdev_opal_set_lock_state", 00:04:40.816 "bdev_opal_delete", 00:04:40.816 "bdev_opal_get_info", 00:04:40.816 "bdev_opal_create", 00:04:40.816 "bdev_nvme_opal_revert", 00:04:40.816 "bdev_nvme_opal_init", 00:04:40.816 "bdev_nvme_send_cmd", 00:04:40.816 "bdev_nvme_set_keys", 00:04:40.816 "bdev_nvme_get_path_iostat", 00:04:40.816 "bdev_nvme_get_mdns_discovery_info", 00:04:40.816 "bdev_nvme_stop_mdns_discovery", 00:04:40.816 "bdev_nvme_start_mdns_discovery", 00:04:40.816 "bdev_nvme_set_multipath_policy", 00:04:40.816 "bdev_nvme_set_preferred_path", 00:04:40.816 "bdev_nvme_get_io_paths", 00:04:40.816 "bdev_nvme_remove_error_injection", 00:04:40.816 "bdev_nvme_add_error_injection", 00:04:40.816 "bdev_nvme_get_discovery_info", 00:04:40.816 "bdev_nvme_stop_discovery", 00:04:40.816 "bdev_nvme_start_discovery", 00:04:40.816 "bdev_nvme_get_controller_health_info", 00:04:40.816 "bdev_nvme_disable_controller", 00:04:40.816 "bdev_nvme_enable_controller", 00:04:40.816 "bdev_nvme_reset_controller", 00:04:40.816 "bdev_nvme_get_transport_statistics", 00:04:40.816 "bdev_nvme_apply_firmware", 00:04:40.816 "bdev_nvme_detach_controller", 00:04:40.816 "bdev_nvme_get_controllers", 00:04:40.816 "bdev_nvme_attach_controller", 00:04:40.816 "bdev_nvme_set_hotplug", 00:04:40.816 "bdev_nvme_set_options", 00:04:40.816 "bdev_passthru_delete", 00:04:40.816 "bdev_passthru_create", 00:04:40.816 "bdev_lvol_set_parent_bdev", 00:04:40.816 "bdev_lvol_set_parent", 00:04:40.816 "bdev_lvol_check_shallow_copy", 00:04:40.816 "bdev_lvol_start_shallow_copy", 00:04:40.816 "bdev_lvol_grow_lvstore", 00:04:40.816 
"bdev_lvol_get_lvols", 00:04:40.816 "bdev_lvol_get_lvstores", 00:04:40.816 "bdev_lvol_delete", 00:04:40.816 "bdev_lvol_set_read_only", 00:04:40.816 "bdev_lvol_resize", 00:04:40.816 "bdev_lvol_decouple_parent", 00:04:40.816 "bdev_lvol_inflate", 00:04:40.816 "bdev_lvol_rename", 00:04:40.816 "bdev_lvol_clone_bdev", 00:04:40.816 "bdev_lvol_clone", 00:04:40.816 "bdev_lvol_snapshot", 00:04:40.816 "bdev_lvol_create", 00:04:40.816 "bdev_lvol_delete_lvstore", 00:04:40.816 "bdev_lvol_rename_lvstore", 00:04:40.816 "bdev_lvol_create_lvstore", 00:04:40.816 "bdev_raid_set_options", 00:04:40.816 "bdev_raid_remove_base_bdev", 00:04:40.816 "bdev_raid_add_base_bdev", 00:04:40.816 "bdev_raid_delete", 00:04:40.816 "bdev_raid_create", 00:04:40.816 "bdev_raid_get_bdevs", 00:04:40.816 "bdev_error_inject_error", 00:04:40.816 "bdev_error_delete", 00:04:40.816 "bdev_error_create", 00:04:40.816 "bdev_split_delete", 00:04:40.816 "bdev_split_create", 00:04:40.816 "bdev_delay_delete", 00:04:40.816 "bdev_delay_create", 00:04:40.816 "bdev_delay_update_latency", 00:04:40.816 "bdev_zone_block_delete", 00:04:40.816 "bdev_zone_block_create", 00:04:40.816 "blobfs_create", 00:04:40.816 "blobfs_detect", 00:04:40.816 "blobfs_set_cache_size", 00:04:40.816 "bdev_aio_delete", 00:04:40.816 "bdev_aio_rescan", 00:04:40.816 "bdev_aio_create", 00:04:40.816 "bdev_ftl_set_property", 00:04:40.816 "bdev_ftl_get_properties", 00:04:40.816 "bdev_ftl_get_stats", 00:04:40.816 "bdev_ftl_unmap", 00:04:40.816 "bdev_ftl_unload", 00:04:40.816 "bdev_ftl_delete", 00:04:40.816 "bdev_ftl_load", 00:04:40.816 "bdev_ftl_create", 00:04:40.816 "bdev_virtio_attach_controller", 00:04:40.816 "bdev_virtio_scsi_get_devices", 00:04:40.816 "bdev_virtio_detach_controller", 00:04:40.816 "bdev_virtio_blk_set_hotplug", 00:04:40.816 "bdev_iscsi_delete", 00:04:40.816 "bdev_iscsi_create", 00:04:40.816 "bdev_iscsi_set_options", 00:04:40.816 "accel_error_inject_error", 00:04:40.816 "ioat_scan_accel_module", 00:04:40.816 "dsa_scan_accel_module", 
00:04:40.816 "iaa_scan_accel_module", 00:04:40.816 "vfu_virtio_create_fs_endpoint", 00:04:40.816 "vfu_virtio_create_scsi_endpoint", 00:04:40.816 "vfu_virtio_scsi_remove_target", 00:04:40.816 "vfu_virtio_scsi_add_target", 00:04:40.816 "vfu_virtio_create_blk_endpoint", 00:04:40.816 "vfu_virtio_delete_endpoint", 00:04:40.816 "keyring_file_remove_key", 00:04:40.816 "keyring_file_add_key", 00:04:40.816 "keyring_linux_set_options", 00:04:40.816 "fsdev_aio_delete", 00:04:40.816 "fsdev_aio_create", 00:04:40.816 "iscsi_get_histogram", 00:04:40.816 "iscsi_enable_histogram", 00:04:40.816 "iscsi_set_options", 00:04:40.816 "iscsi_get_auth_groups", 00:04:40.816 "iscsi_auth_group_remove_secret", 00:04:40.816 "iscsi_auth_group_add_secret", 00:04:40.816 "iscsi_delete_auth_group", 00:04:40.816 "iscsi_create_auth_group", 00:04:40.816 "iscsi_set_discovery_auth", 00:04:40.816 "iscsi_get_options", 00:04:40.816 "iscsi_target_node_request_logout", 00:04:40.816 "iscsi_target_node_set_redirect", 00:04:40.816 "iscsi_target_node_set_auth", 00:04:40.816 "iscsi_target_node_add_lun", 00:04:40.816 "iscsi_get_stats", 00:04:40.816 "iscsi_get_connections", 00:04:40.816 "iscsi_portal_group_set_auth", 00:04:40.816 "iscsi_start_portal_group", 00:04:40.816 "iscsi_delete_portal_group", 00:04:40.816 "iscsi_create_portal_group", 00:04:40.816 "iscsi_get_portal_groups", 00:04:40.816 "iscsi_delete_target_node", 00:04:40.816 "iscsi_target_node_remove_pg_ig_maps", 00:04:40.816 "iscsi_target_node_add_pg_ig_maps", 00:04:40.816 "iscsi_create_target_node", 00:04:40.816 "iscsi_get_target_nodes", 00:04:40.816 "iscsi_delete_initiator_group", 00:04:40.816 "iscsi_initiator_group_remove_initiators", 00:04:40.816 "iscsi_initiator_group_add_initiators", 00:04:40.816 "iscsi_create_initiator_group", 00:04:40.816 "iscsi_get_initiator_groups", 00:04:40.816 "nvmf_set_crdt", 00:04:40.816 "nvmf_set_config", 00:04:40.816 "nvmf_set_max_subsystems", 00:04:40.817 "nvmf_stop_mdns_prr", 00:04:40.817 "nvmf_publish_mdns_prr", 
00:04:40.817 "nvmf_subsystem_get_listeners", 00:04:40.817 "nvmf_subsystem_get_qpairs", 00:04:40.817 "nvmf_subsystem_get_controllers", 00:04:40.817 "nvmf_get_stats", 00:04:40.817 "nvmf_get_transports", 00:04:40.817 "nvmf_create_transport", 00:04:40.817 "nvmf_get_targets", 00:04:40.817 "nvmf_delete_target", 00:04:40.817 "nvmf_create_target", 00:04:40.817 "nvmf_subsystem_allow_any_host", 00:04:40.817 "nvmf_subsystem_set_keys", 00:04:40.817 "nvmf_subsystem_remove_host", 00:04:40.817 "nvmf_subsystem_add_host", 00:04:40.817 "nvmf_ns_remove_host", 00:04:40.817 "nvmf_ns_add_host", 00:04:40.817 "nvmf_subsystem_remove_ns", 00:04:40.817 "nvmf_subsystem_set_ns_ana_group", 00:04:40.817 "nvmf_subsystem_add_ns", 00:04:40.817 "nvmf_subsystem_listener_set_ana_state", 00:04:40.817 "nvmf_discovery_get_referrals", 00:04:40.817 "nvmf_discovery_remove_referral", 00:04:40.817 "nvmf_discovery_add_referral", 00:04:40.817 "nvmf_subsystem_remove_listener", 00:04:40.817 "nvmf_subsystem_add_listener", 00:04:40.817 "nvmf_delete_subsystem", 00:04:40.817 "nvmf_create_subsystem", 00:04:40.817 "nvmf_get_subsystems", 00:04:40.817 "env_dpdk_get_mem_stats", 00:04:40.817 "nbd_get_disks", 00:04:40.817 "nbd_stop_disk", 00:04:40.817 "nbd_start_disk", 00:04:40.817 "ublk_recover_disk", 00:04:40.817 "ublk_get_disks", 00:04:40.817 "ublk_stop_disk", 00:04:40.817 "ublk_start_disk", 00:04:40.817 "ublk_destroy_target", 00:04:40.817 "ublk_create_target", 00:04:40.817 "virtio_blk_create_transport", 00:04:40.817 "virtio_blk_get_transports", 00:04:40.817 "vhost_controller_set_coalescing", 00:04:40.817 "vhost_get_controllers", 00:04:40.817 "vhost_delete_controller", 00:04:40.817 "vhost_create_blk_controller", 00:04:40.817 "vhost_scsi_controller_remove_target", 00:04:40.817 "vhost_scsi_controller_add_target", 00:04:40.817 "vhost_start_scsi_controller", 00:04:40.817 "vhost_create_scsi_controller", 00:04:40.817 "thread_set_cpumask", 00:04:40.817 "scheduler_set_options", 00:04:40.817 "framework_get_governor", 00:04:40.817 
"framework_get_scheduler", 00:04:40.817 "framework_set_scheduler", 00:04:40.817 "framework_get_reactors", 00:04:40.817 "thread_get_io_channels", 00:04:40.817 "thread_get_pollers", 00:04:40.817 "thread_get_stats", 00:04:40.817 "framework_monitor_context_switch", 00:04:40.817 "spdk_kill_instance", 00:04:40.817 "log_enable_timestamps", 00:04:40.817 "log_get_flags", 00:04:40.817 "log_clear_flag", 00:04:40.817 "log_set_flag", 00:04:40.817 "log_get_level", 00:04:40.817 "log_set_level", 00:04:40.817 "log_get_print_level", 00:04:40.817 "log_set_print_level", 00:04:40.817 "framework_enable_cpumask_locks", 00:04:40.817 "framework_disable_cpumask_locks", 00:04:40.817 "framework_wait_init", 00:04:40.817 "framework_start_init", 00:04:40.817 "scsi_get_devices", 00:04:40.817 "bdev_get_histogram", 00:04:40.817 "bdev_enable_histogram", 00:04:40.817 "bdev_set_qos_limit", 00:04:40.817 "bdev_set_qd_sampling_period", 00:04:40.817 "bdev_get_bdevs", 00:04:40.817 "bdev_reset_iostat", 00:04:40.817 "bdev_get_iostat", 00:04:40.817 "bdev_examine", 00:04:40.817 "bdev_wait_for_examine", 00:04:40.817 "bdev_set_options", 00:04:40.817 "accel_get_stats", 00:04:40.817 "accel_set_options", 00:04:40.817 "accel_set_driver", 00:04:40.817 "accel_crypto_key_destroy", 00:04:40.817 "accel_crypto_keys_get", 00:04:40.817 "accel_crypto_key_create", 00:04:40.817 "accel_assign_opc", 00:04:40.817 "accel_get_module_info", 00:04:40.817 "accel_get_opc_assignments", 00:04:40.817 "vmd_rescan", 00:04:40.817 "vmd_remove_device", 00:04:40.817 "vmd_enable", 00:04:40.817 "sock_get_default_impl", 00:04:40.817 "sock_set_default_impl", 00:04:40.817 "sock_impl_set_options", 00:04:40.817 "sock_impl_get_options", 00:04:40.817 "iobuf_get_stats", 00:04:40.817 "iobuf_set_options", 00:04:40.817 "keyring_get_keys", 00:04:40.817 "vfu_tgt_set_base_path", 00:04:40.817 "framework_get_pci_devices", 00:04:40.817 "framework_get_config", 00:04:40.817 "framework_get_subsystems", 00:04:40.817 "fsdev_set_opts", 00:04:40.817 "fsdev_get_opts", 
00:04:40.817 "trace_get_info", 00:04:40.817 "trace_get_tpoint_group_mask", 00:04:40.817 "trace_disable_tpoint_group", 00:04:40.817 "trace_enable_tpoint_group", 00:04:40.817 "trace_clear_tpoint_mask", 00:04:40.817 "trace_set_tpoint_mask", 00:04:40.817 "notify_get_notifications", 00:04:40.817 "notify_get_types", 00:04:40.817 "spdk_get_version", 00:04:40.817 "rpc_get_methods" 00:04:40.817 ] 00:04:40.817 11:15:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:40.817 11:15:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.817 11:15:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.817 11:15:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:40.817 11:15:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2072156 00:04:40.817 11:15:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2072156 ']' 00:04:40.817 11:15:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2072156 00:04:40.817 11:15:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:40.817 11:15:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.817 11:15:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2072156 00:04:41.075 11:15:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.075 11:15:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.075 11:15:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2072156' 00:04:41.075 killing process with pid 2072156 00:04:41.075 11:15:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2072156 00:04:41.075 11:15:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2072156 00:04:41.334 00:04:41.334 real 0m1.147s 00:04:41.334 user 0m1.896s 00:04:41.334 sys 0m0.459s 00:04:41.334 11:15:54 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.334 11:15:54 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.334 ************************************ 00:04:41.334 END TEST spdkcli_tcp 00:04:41.334 ************************************ 00:04:41.334 11:15:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.334 11:15:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.334 11:15:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.334 11:15:54 -- common/autotest_common.sh@10 -- # set +x 00:04:41.334 ************************************ 00:04:41.334 START TEST dpdk_mem_utility 00:04:41.334 ************************************ 00:04:41.334 11:15:54 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.334 * Looking for test storage... 00:04:41.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:41.334 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.334 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.334 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.592 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.593 11:15:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:41.593 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.593 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:04:41.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.593 --rc genhtml_branch_coverage=1 00:04:41.593 --rc genhtml_function_coverage=1 00:04:41.593 --rc genhtml_legend=1 00:04:41.593 --rc geninfo_all_blocks=1 00:04:41.593 --rc geninfo_unexecuted_blocks=1 00:04:41.593 00:04:41.593 ' 00:04:41.593 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.593 --rc genhtml_branch_coverage=1 00:04:41.593 --rc genhtml_function_coverage=1 00:04:41.593 --rc genhtml_legend=1 00:04:41.593 --rc geninfo_all_blocks=1 00:04:41.593 --rc geninfo_unexecuted_blocks=1 00:04:41.593 00:04:41.593 ' 00:04:41.593 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.593 --rc genhtml_branch_coverage=1 00:04:41.593 --rc genhtml_function_coverage=1 00:04:41.593 --rc genhtml_legend=1 00:04:41.593 --rc geninfo_all_blocks=1 00:04:41.593 --rc geninfo_unexecuted_blocks=1 00:04:41.593 00:04:41.593 ' 00:04:41.593 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.593 --rc genhtml_branch_coverage=1 00:04:41.593 --rc genhtml_function_coverage=1 00:04:41.593 --rc genhtml_legend=1 00:04:41.593 --rc geninfo_all_blocks=1 00:04:41.593 --rc geninfo_unexecuted_blocks=1 00:04:41.593 00:04:41.593 ' 00:04:41.593 11:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:41.593 11:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2072436 00:04:41.593 11:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2072436 00:04:41.593 11:15:55 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.593 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2072436 ']' 00:04:41.593 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.593 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.593 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.593 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.593 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.593 [2024-11-19 11:15:55.197465] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:41.593 [2024-11-19 11:15:55.197514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2072436 ] 00:04:41.593 [2024-11-19 11:15:55.271611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.593 [2024-11-19 11:15:55.314560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.851 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.851 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:41.851 11:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:41.851 11:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:41.851 11:15:55 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.851 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.851 { 00:04:41.851 "filename": "/tmp/spdk_mem_dump.txt" 00:04:41.851 } 00:04:41.851 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.851 11:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:41.851 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:41.851 1 heaps totaling size 810.000000 MiB 00:04:41.851 size: 810.000000 MiB heap id: 0 00:04:41.851 end heaps---------- 00:04:41.851 9 mempools totaling size 595.772034 MiB 00:04:41.851 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:41.851 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:41.851 size: 92.545471 MiB name: bdev_io_2072436 00:04:41.851 size: 50.003479 MiB name: msgpool_2072436 00:04:41.851 size: 36.509338 MiB name: fsdev_io_2072436 00:04:41.851 size: 21.763794 MiB name: PDU_Pool 00:04:41.851 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:41.851 size: 4.133484 MiB name: evtpool_2072436 00:04:41.851 size: 0.026123 MiB name: Session_Pool 00:04:41.851 end mempools------- 00:04:41.851 6 memzones totaling size 4.142822 MiB 00:04:41.851 size: 1.000366 MiB name: RG_ring_0_2072436 00:04:41.851 size: 1.000366 MiB name: RG_ring_1_2072436 00:04:41.851 size: 1.000366 MiB name: RG_ring_4_2072436 00:04:41.851 size: 1.000366 MiB name: RG_ring_5_2072436 00:04:41.851 size: 0.125366 MiB name: RG_ring_2_2072436 00:04:41.851 size: 0.015991 MiB name: RG_ring_3_2072436 00:04:41.851 end memzones------- 00:04:41.851 11:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:42.110 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:42.111 list of free elements. 
size: 10.862488 MiB 00:04:42.111 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:42.111 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:42.111 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:42.111 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:42.111 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:42.111 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:42.111 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:42.111 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:42.111 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:42.111 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:42.111 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:42.111 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:42.111 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:42.111 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:42.111 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:42.111 list of standard malloc elements. 
size: 199.218628 MiB 00:04:42.111 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:42.111 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:42.111 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:42.111 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:42.111 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:42.111 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:42.111 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:42.111 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:42.111 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:42.111 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:42.111 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:42.111 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:42.111 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:42.111 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:42.111 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:42.111 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:42.111 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:42.111 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:42.111 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:42.111 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:42.111 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:42.111 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:42.111 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:42.111 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:42.111 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:42.111 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:42.111 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:42.111 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:42.111 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:42.111 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:42.111 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:42.111 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:42.111 list of memzone associated elements. 
size: 599.918884 MiB 00:04:42.111 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:42.111 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:42.111 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:42.111 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:42.111 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:42.111 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2072436_0 00:04:42.111 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:42.111 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2072436_0 00:04:42.111 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:42.111 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2072436_0 00:04:42.111 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:42.111 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:42.111 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:42.111 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:42.111 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:42.111 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2072436_0 00:04:42.111 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:42.111 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2072436 00:04:42.111 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:42.111 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2072436 00:04:42.111 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:42.111 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:42.111 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:42.111 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:42.111 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:42.111 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:42.111 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:42.111 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:42.111 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:42.111 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2072436 00:04:42.111 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:42.111 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2072436 00:04:42.111 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:42.111 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2072436 00:04:42.111 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:42.111 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2072436 00:04:42.111 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:42.111 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2072436 00:04:42.111 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:42.111 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2072436 00:04:42.111 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:42.111 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:42.111 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:42.111 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:42.111 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:42.111 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:42.111 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:42.111 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2072436 00:04:42.111 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:42.111 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2072436 00:04:42.111 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:42.111 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:42.111 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:42.111 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:42.111 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:42.111 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2072436 00:04:42.111 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:42.111 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:42.111 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:42.111 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2072436 00:04:42.111 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:42.111 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2072436 00:04:42.111 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:42.111 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2072436 00:04:42.111 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:42.111 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:42.111 11:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:42.111 11:15:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2072436 00:04:42.111 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2072436 ']' 00:04:42.111 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2072436 00:04:42.111 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:42.111 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.111 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2072436 00:04:42.111 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.111 11:15:55 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.111 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2072436' 00:04:42.111 killing process with pid 2072436 00:04:42.111 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2072436 00:04:42.111 11:15:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2072436 00:04:42.371 00:04:42.371 real 0m1.032s 00:04:42.371 user 0m0.985s 00:04:42.371 sys 0m0.403s 00:04:42.371 11:15:56 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.371 11:15:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.371 ************************************ 00:04:42.371 END TEST dpdk_mem_utility 00:04:42.371 ************************************ 00:04:42.371 11:15:56 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:42.371 11:15:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.371 11:15:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.371 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.371 ************************************ 00:04:42.371 START TEST event 00:04:42.371 ************************************ 00:04:42.371 11:15:56 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:42.630 * Looking for test storage... 
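The `cmp_versions` trace above (scripts/common.sh, comparing lcov 1.15 against 2 field by field with `decimal`) decides whether the legacy `--rc lcov_*` option spelling is needed. A minimal sketch of the same "is version A less than version B" check, delegating to `sort -V` instead of the per-field loop — the helper name `lt` matches the trace, but this body is an assumed simplification, not the actual scripts/common.sh implementation:

```shell
# Assumed simplification of the "lt 1.15 2" check traced above.
# The real scripts/common.sh walks each dotted field; this sketch
# lets GNU sort's version ordering (-V) do the comparison instead.
lt() {
  [ "$1" != "$2" ] &&
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if lt 1.15 2; then
  echo "lcov < 2: use the legacy --rc lcov_* option spelling"
fi
```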
00:04:42.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:42.630 11:15:56 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.630 11:15:56 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.630 11:15:56 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.630 11:15:56 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.630 11:15:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.630 11:15:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.630 11:15:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.630 11:15:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.630 11:15:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.630 11:15:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.630 11:15:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.630 11:15:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.630 11:15:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.630 11:15:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.630 11:15:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.630 11:15:56 event -- scripts/common.sh@344 -- # case "$op" in 00:04:42.630 11:15:56 event -- scripts/common.sh@345 -- # : 1 00:04:42.630 11:15:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.630 11:15:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.630 11:15:56 event -- scripts/common.sh@365 -- # decimal 1 00:04:42.630 11:15:56 event -- scripts/common.sh@353 -- # local d=1 00:04:42.630 11:15:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.630 11:15:56 event -- scripts/common.sh@355 -- # echo 1 00:04:42.630 11:15:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.630 11:15:56 event -- scripts/common.sh@366 -- # decimal 2 00:04:42.630 11:15:56 event -- scripts/common.sh@353 -- # local d=2 00:04:42.630 11:15:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.630 11:15:56 event -- scripts/common.sh@355 -- # echo 2 00:04:42.630 11:15:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.630 11:15:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.630 11:15:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.630 11:15:56 event -- scripts/common.sh@368 -- # return 0 00:04:42.630 11:15:56 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.630 11:15:56 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.630 --rc genhtml_branch_coverage=1 00:04:42.630 --rc genhtml_function_coverage=1 00:04:42.630 --rc genhtml_legend=1 00:04:42.630 --rc geninfo_all_blocks=1 00:04:42.630 --rc geninfo_unexecuted_blocks=1 00:04:42.630 00:04:42.630 ' 00:04:42.630 11:15:56 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.630 --rc genhtml_branch_coverage=1 00:04:42.630 --rc genhtml_function_coverage=1 00:04:42.630 --rc genhtml_legend=1 00:04:42.630 --rc geninfo_all_blocks=1 00:04:42.630 --rc geninfo_unexecuted_blocks=1 00:04:42.630 00:04:42.630 ' 00:04:42.630 11:15:56 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.630 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:42.630 --rc genhtml_branch_coverage=1 00:04:42.630 --rc genhtml_function_coverage=1 00:04:42.630 --rc genhtml_legend=1 00:04:42.630 --rc geninfo_all_blocks=1 00:04:42.630 --rc geninfo_unexecuted_blocks=1 00:04:42.630 00:04:42.630 ' 00:04:42.630 11:15:56 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.630 --rc genhtml_branch_coverage=1 00:04:42.630 --rc genhtml_function_coverage=1 00:04:42.630 --rc genhtml_legend=1 00:04:42.630 --rc geninfo_all_blocks=1 00:04:42.630 --rc geninfo_unexecuted_blocks=1 00:04:42.630 00:04:42.630 ' 00:04:42.630 11:15:56 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:42.630 11:15:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:42.630 11:15:56 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:42.630 11:15:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:42.630 11:15:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.630 11:15:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.630 ************************************ 00:04:42.630 START TEST event_perf 00:04:42.630 ************************************ 00:04:42.630 11:15:56 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:42.630 Running I/O for 1 seconds...[2024-11-19 11:15:56.305651] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:42.630 [2024-11-19 11:15:56.305718] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2072732 ] 00:04:42.630 [2024-11-19 11:15:56.382749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:42.889 [2024-11-19 11:15:56.427931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.889 [2024-11-19 11:15:56.428040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.889 [2024-11-19 11:15:56.428074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.889 [2024-11-19 11:15:56.428075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.824 Running I/O for 1 seconds... 00:04:43.824 lcore 0: 201053 00:04:43.824 lcore 1: 201051 00:04:43.824 lcore 2: 201053 00:04:43.824 lcore 3: 201053 00:04:43.824 done. 
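The event_perf run above is launched with `-m 0xF` and reports one event counter per lcore; the mask is a plain bitmap of enabled cores, which is why exactly four reactors (cores 0-3) start. A minimal sketch of decoding such a mask — the 8-bit scan width is an arbitrary choice for the sketch, not a DPDK limit:

```shell
# Decode a DPDK/SPDK core mask into lcore indices.
# -m 0xF sets bits 0-3, matching the four reactors seen in the log.
mask=$((0xF))
for ((i = 0; i < 8; i++)); do
  if (( (mask >> i) & 1 )); then
    echo "lcore $i enabled"
  fi
done
```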
00:04:43.824 00:04:43.824 real 0m1.183s 00:04:43.824 user 0m4.100s 00:04:43.824 sys 0m0.079s 00:04:43.824 11:15:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.824 11:15:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:43.824 ************************************ 00:04:43.824 END TEST event_perf 00:04:43.824 ************************************ 00:04:43.824 11:15:57 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:43.824 11:15:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:43.824 11:15:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.824 11:15:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.824 ************************************ 00:04:43.824 START TEST event_reactor 00:04:43.824 ************************************ 00:04:43.824 11:15:57 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:43.825 [2024-11-19 11:15:57.553498] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:43.825 [2024-11-19 11:15:57.553559] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2072984 ] 00:04:44.083 [2024-11-19 11:15:57.631568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.083 [2024-11-19 11:15:57.671562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.021 test_start 00:04:45.021 oneshot 00:04:45.021 tick 100 00:04:45.021 tick 100 00:04:45.021 tick 250 00:04:45.021 tick 100 00:04:45.021 tick 100 00:04:45.021 tick 100 00:04:45.021 tick 250 00:04:45.021 tick 500 00:04:45.021 tick 100 00:04:45.021 tick 100 00:04:45.021 tick 250 00:04:45.021 tick 100 00:04:45.021 tick 100 00:04:45.021 test_end 00:04:45.021 00:04:45.021 real 0m1.174s 00:04:45.021 user 0m1.102s 00:04:45.021 sys 0m0.067s 00:04:45.021 11:15:58 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.021 11:15:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:45.021 ************************************ 00:04:45.021 END TEST event_reactor 00:04:45.021 ************************************ 00:04:45.021 11:15:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:45.021 11:15:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:45.021 11:15:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.021 11:15:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.021 ************************************ 00:04:45.021 START TEST event_reactor_perf 00:04:45.021 ************************************ 00:04:45.021 11:15:58 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:45.021 [2024-11-19 11:15:58.799455] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:45.021 [2024-11-19 11:15:58.799522] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073238 ] 00:04:45.280 [2024-11-19 11:15:58.879027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.280 [2024-11-19 11:15:58.919390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.218 test_start 00:04:46.218 test_end 00:04:46.218 Performance: 505766 events per second 00:04:46.218 00:04:46.218 real 0m1.179s 00:04:46.218 user 0m1.108s 00:04:46.218 sys 0m0.067s 00:04:46.218 11:15:59 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.218 11:15:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:46.218 ************************************ 00:04:46.218 END TEST event_reactor_perf 00:04:46.218 ************************************ 00:04:46.218 11:15:59 event -- event/event.sh@49 -- # uname -s 00:04:46.218 11:15:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:46.218 11:15:59 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:46.218 11:15:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.218 11:15:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.478 11:15:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.478 ************************************ 00:04:46.478 START TEST event_scheduler 00:04:46.478 ************************************ 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:46.478 * Looking for test storage... 00:04:46.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.478 11:16:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.478 --rc genhtml_branch_coverage=1 00:04:46.478 --rc genhtml_function_coverage=1 00:04:46.478 --rc genhtml_legend=1 00:04:46.478 --rc geninfo_all_blocks=1 00:04:46.478 --rc geninfo_unexecuted_blocks=1 00:04:46.478 00:04:46.478 ' 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.478 --rc genhtml_branch_coverage=1 00:04:46.478 --rc genhtml_function_coverage=1 00:04:46.478 --rc 
genhtml_legend=1 00:04:46.478 --rc geninfo_all_blocks=1 00:04:46.478 --rc geninfo_unexecuted_blocks=1 00:04:46.478 00:04:46.478 ' 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.478 --rc genhtml_branch_coverage=1 00:04:46.478 --rc genhtml_function_coverage=1 00:04:46.478 --rc genhtml_legend=1 00:04:46.478 --rc geninfo_all_blocks=1 00:04:46.478 --rc geninfo_unexecuted_blocks=1 00:04:46.478 00:04:46.478 ' 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.478 --rc genhtml_branch_coverage=1 00:04:46.478 --rc genhtml_function_coverage=1 00:04:46.478 --rc genhtml_legend=1 00:04:46.478 --rc geninfo_all_blocks=1 00:04:46.478 --rc geninfo_unexecuted_blocks=1 00:04:46.478 00:04:46.478 ' 00:04:46.478 11:16:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:46.478 11:16:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2073521 00:04:46.478 11:16:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.478 11:16:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:46.478 11:16:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2073521 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2073521 ']' 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.478 11:16:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.478 [2024-11-19 11:16:00.253110] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:46.478 [2024-11-19 11:16:00.253157] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073521 ] 00:04:46.739 [2024-11-19 11:16:00.328328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:46.739 [2024-11-19 11:16:00.374641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.739 [2024-11-19 11:16:00.374749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.739 [2024-11-19 11:16:00.374855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.739 [2024-11-19 11:16:00.374855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.739 11:16:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.739 11:16:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:46.739 11:16:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:46.739 11:16:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.739 11:16:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.739 [2024-11-19 11:16:00.407319] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:46.739 [2024-11-19 11:16:00.407336] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:46.739 [2024-11-19 11:16:00.407346] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:46.739 [2024-11-19 11:16:00.407352] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:46.739 [2024-11-19 11:16:00.407357] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:46.739 11:16:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.739 11:16:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:46.739 11:16:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.739 11:16:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.739 [2024-11-19 11:16:00.482050] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:46.739 11:16:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.739 11:16:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:46.739 11:16:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.739 11:16:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.739 11:16:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.739 ************************************ 00:04:46.739 START TEST scheduler_create_thread 00:04:46.739 ************************************ 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.999 2 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.999 3 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.999 4 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.999 5 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.999 11:16:00 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.999 6 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.999 7 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.999 8 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.999 11:16:00 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.999 9 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.999 10 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.999 11:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.567 11:16:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.567 11:16:01 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:47.567 11:16:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.567 11:16:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.944 11:16:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.944 11:16:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:48.944 11:16:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:48.944 11:16:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.944 11:16:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.881 11:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.881 00:04:49.881 real 0m3.102s 00:04:49.881 user 0m0.025s 00:04:49.881 sys 0m0.003s 00:04:49.881 11:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.881 11:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.881 ************************************ 00:04:49.881 END TEST scheduler_create_thread 00:04:49.881 ************************************ 00:04:49.881 11:16:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:49.881 11:16:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2073521 00:04:49.881 11:16:03 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2073521 ']' 00:04:49.881 11:16:03 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2073521 00:04:49.881 11:16:03 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:50.140 11:16:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.140 11:16:03 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2073521 00:04:50.140 11:16:03 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:50.140 11:16:03 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:50.140 11:16:03 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2073521' 00:04:50.140 killing process with pid 2073521 00:04:50.140 11:16:03 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2073521 00:04:50.140 11:16:03 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2073521 00:04:50.399 [2024-11-19 11:16:04.001219] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:50.658 00:04:50.658 real 0m4.152s 00:04:50.658 user 0m6.605s 00:04:50.658 sys 0m0.372s 00:04:50.658 11:16:04 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.658 11:16:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.658 ************************************ 00:04:50.658 END TEST event_scheduler 00:04:50.658 ************************************ 00:04:50.658 11:16:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:50.658 11:16:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:50.658 11:16:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.658 11:16:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.658 11:16:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.658 ************************************ 00:04:50.658 START TEST app_repeat 00:04:50.658 ************************************ 00:04:50.658 11:16:04 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2074259 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2074259' 00:04:50.658 Process app_repeat pid: 2074259 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:50.658 spdk_app_start Round 0 00:04:50.658 11:16:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2074259 /var/tmp/spdk-nbd.sock 00:04:50.658 11:16:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2074259 ']' 00:04:50.658 11:16:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.658 11:16:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.658 11:16:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:50.659 11:16:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.659 11:16:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.659 [2024-11-19 11:16:04.301452] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:50.659 [2024-11-19 11:16:04.301503] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074259 ] 00:04:50.659 [2024-11-19 11:16:04.378502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.659 [2024-11-19 11:16:04.418797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.659 [2024-11-19 11:16:04.418798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.917 11:16:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.917 11:16:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:50.917 11:16:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.917 Malloc0 00:04:51.176 11:16:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.176 Malloc1 00:04:51.176 11:16:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.176 
11:16:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.176 11:16:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.435 /dev/nbd0 00:04:51.435 11:16:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.435 11:16:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:51.435 1+0 records in 00:04:51.435 1+0 records out 00:04:51.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183555 s, 22.3 MB/s 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.435 11:16:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.435 11:16:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.435 11:16:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.435 11:16:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.694 /dev/nbd1 00:04:51.694 11:16:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.694 11:16:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.694 11:16:05 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.694 1+0 records in 00:04:51.694 1+0 records out 00:04:51.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227685 s, 18.0 MB/s 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.694 11:16:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.694 11:16:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.694 11:16:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.694 11:16:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.694 11:16:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.694 11:16:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.953 { 00:04:51.953 "nbd_device": "/dev/nbd0", 00:04:51.953 "bdev_name": "Malloc0" 00:04:51.953 }, 00:04:51.953 { 00:04:51.953 "nbd_device": "/dev/nbd1", 00:04:51.953 "bdev_name": "Malloc1" 00:04:51.953 } 00:04:51.953 ]' 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.953 { 00:04:51.953 "nbd_device": "/dev/nbd0", 00:04:51.953 "bdev_name": "Malloc0" 00:04:51.953 
}, 00:04:51.953 { 00:04:51.953 "nbd_device": "/dev/nbd1", 00:04:51.953 "bdev_name": "Malloc1" 00:04:51.953 } 00:04:51.953 ]' 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.953 /dev/nbd1' 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.953 /dev/nbd1' 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.953 256+0 records in 00:04:51.953 256+0 records out 00:04:51.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106519 s, 98.4 MB/s 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.953 256+0 records in 00:04:51.953 256+0 records out 00:04:51.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140524 s, 74.6 MB/s 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.953 256+0 records in 00:04:51.953 256+0 records out 00:04:51.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154942 s, 67.7 MB/s 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.953 11:16:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.212 11:16:05 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.212 11:16:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.471 11:16:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.471 11:16:06 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.471 11:16:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.471 11:16:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.471 11:16:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.471 11:16:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.471 11:16:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.471 11:16:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.471 11:16:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.471 11:16:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.471 11:16:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.730 11:16:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.730 11:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.730 11:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.730 11:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.730 11:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.730 11:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.730 11:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.730 11:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.730 11:16:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.730 11:16:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.730 11:16:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.730 11:16:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.730 11:16:06 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.989 11:16:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.248 [2024-11-19 11:16:06.800006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.248 [2024-11-19 11:16:06.836919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.248 [2024-11-19 11:16:06.836919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.248 [2024-11-19 11:16:06.877353] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.248 [2024-11-19 11:16:06.877393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.537 11:16:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.537 11:16:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:56.537 spdk_app_start Round 1 00:04:56.537 11:16:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2074259 /var/tmp/spdk-nbd.sock 00:04:56.537 11:16:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2074259 ']' 00:04:56.537 11:16:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.537 11:16:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.537 11:16:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
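The `nbd_stop_disk` / `waitfornbd_exit` sequence traced just above (bdev/nbd_common.sh@35-45) polls `/proc/partitions` until the device entry disappears. A minimal sketch of that polling pattern, with the partitions file parameterized so it can be exercised without a real NBD device (an assumption for illustration; the actual helper reads `/proc/partitions` directly):

```shell
# Sketch of the waitfornbd_exit polling loop seen in the trace above.
# parts_file defaults to /proc/partitions but is overridable for testing
# (hypothetical parameter, not in the original helper).
waitfornbd_exit() {
    local nbd_name=$1
    local parts_file=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the device name as a whole word (nbd0, not nbd01)
        if ! grep -q -w "$nbd_name" "$parts_file"; then
            return 0          # entry gone: the kernel released the device
        fi
        sleep 0.1             # brief back-off between polls
    done
    return 1                  # still present after 20 attempts
}
```

The 20-iteration bound matches the `(( i <= 20 ))` guard visible in the xtrace; in the logged runs the device is already gone on the first poll, so the loop `break`s immediately.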
00:04:56.537 11:16:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.537 11:16:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.537 11:16:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.537 11:16:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:56.537 11:16:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.537 Malloc0 00:04:56.537 11:16:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.537 Malloc1 00:04:56.537 11:16:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.537 11:16:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.795 /dev/nbd0 00:04:56.795 11:16:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.795 11:16:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.795 1+0 records in 00:04:56.795 1+0 records out 00:04:56.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000104361 s, 39.2 MB/s 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:56.795 11:16:10 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:56.795 11:16:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:56.795 11:16:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.795 11:16:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.795 11:16:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.054 /dev/nbd1 00:04:57.054 11:16:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.054 11:16:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.054 1+0 records in 00:04:57.054 1+0 records out 00:04:57.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226347 s, 18.1 MB/s 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.054 11:16:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.054 11:16:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.054 11:16:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.054 11:16:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.054 11:16:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.054 11:16:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.314 11:16:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:57.314 { 00:04:57.314 "nbd_device": "/dev/nbd0", 00:04:57.314 "bdev_name": "Malloc0" 00:04:57.314 }, 00:04:57.314 { 00:04:57.314 "nbd_device": "/dev/nbd1", 00:04:57.314 "bdev_name": "Malloc1" 00:04:57.314 } 00:04:57.314 ]' 00:04:57.314 11:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:57.314 { 00:04:57.314 "nbd_device": "/dev/nbd0", 00:04:57.314 "bdev_name": "Malloc0" 00:04:57.314 }, 00:04:57.314 { 00:04:57.314 "nbd_device": "/dev/nbd1", 00:04:57.314 "bdev_name": "Malloc1" 00:04:57.314 } 00:04:57.314 ]' 00:04:57.314 11:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.314 11:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:57.314 /dev/nbd1' 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.314 /dev/nbd1' 00:04:57.314 
11:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.314 256+0 records in 00:04:57.314 256+0 records out 00:04:57.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106608 s, 98.4 MB/s 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.314 256+0 records in 00:04:57.314 256+0 records out 00:04:57.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014907 s, 70.3 MB/s 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.314 256+0 records in 00:04:57.314 256+0 records out 00:04:57.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153575 s, 68.3 MB/s 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.314 11:16:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:57.573 11:16:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:57.573 11:16:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:57.573 11:16:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:57.573 11:16:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.573 11:16:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.573 11:16:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:57.573 11:16:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.573 11:16:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.573 11:16:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.573 11:16:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.833 11:16:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.833 11:16:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.833 11:16:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.833 11:16:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.833 11:16:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.833 11:16:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.833 11:16:11 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:57.833 11:16:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.833 11:16:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.833 11:16:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.833 11:16:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.093 11:16:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.093 11:16:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.093 11:16:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.093 11:16:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.093 11:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.093 11:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.093 11:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:58.093 11:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.093 11:16:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.093 11:16:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.093 11:16:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.093 11:16:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.093 11:16:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.352 11:16:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:58.611 [2024-11-19 11:16:12.141011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.611 [2024-11-19 11:16:12.178325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.611 [2024-11-19 11:16:12.178326] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.611 [2024-11-19 11:16:12.220184] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:58.611 [2024-11-19 11:16:12.220224] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:01.899 11:16:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:01.899 11:16:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:01.899 spdk_app_start Round 2 00:05:01.899 11:16:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2074259 /var/tmp/spdk-nbd.sock 00:05:01.899 11:16:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2074259 ']' 00:05:01.899 11:16:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.899 11:16:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.899 11:16:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:01.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
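The write/verify pass traced in this round (bdev/nbd_common.sh@70-85) generates 1 MiB of random data, copies it onto each NBD device, then byte-compares the devices against the reference file. A hedged sketch of that flow, using plain files in place of `/dev/nbd*` and omitting the `oflag=direct`/`iflag=direct` flags the real helper uses on block devices (both are assumptions so the logic runs without an SPDK target):

```shell
# Sketch of the nbd_dd_data_verify write+verify flow from the trace.
nbd_dd_data_verify() {
    local nbd_list=("$@")
    local tmp_file dev
    tmp_file=$(mktemp)
    # Write phase: 256 x 4 KiB = 1 MiB of random reference data,
    # copied onto every device in the list.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
    done
    # Verify phase: byte-compare the first 1 MiB of each device
    # against the reference file (cmp -b reports differing bytes).
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || { rm -f "$tmp_file"; return 1; }
    done
    rm -f "$tmp_file"
}
```

The `256+0 records in / out` and `1048576 bytes` lines in the log are the `dd` status output from exactly this sequence; a mismatch in the `cmp` phase would fail the round before the disks are stopped.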
00:05:01.899 11:16:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.899 11:16:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.899 11:16:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.899 11:16:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:01.899 11:16:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.899 Malloc0 00:05:01.899 11:16:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.899 Malloc1 00:05:01.899 11:16:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.899 11:16:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.157 /dev/nbd0 00:05:02.157 11:16:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.157 11:16:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.157 1+0 records in 00:05:02.157 1+0 records out 00:05:02.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000117515 s, 34.9 MB/s 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.157 11:16:15 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.157 11:16:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.157 11:16:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.157 11:16:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.157 11:16:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.416 /dev/nbd1 00:05:02.416 11:16:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.416 11:16:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.416 1+0 records in 00:05:02.416 1+0 records out 00:05:02.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234523 s, 17.5 MB/s 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.416 11:16:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.416 11:16:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.416 11:16:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.416 11:16:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.416 11:16:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.416 11:16:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.675 { 00:05:02.675 "nbd_device": "/dev/nbd0", 00:05:02.675 "bdev_name": "Malloc0" 00:05:02.675 }, 00:05:02.675 { 00:05:02.675 "nbd_device": "/dev/nbd1", 00:05:02.675 "bdev_name": "Malloc1" 00:05:02.675 } 00:05:02.675 ]' 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.675 { 00:05:02.675 "nbd_device": "/dev/nbd0", 00:05:02.675 "bdev_name": "Malloc0" 00:05:02.675 }, 00:05:02.675 { 00:05:02.675 "nbd_device": "/dev/nbd1", 00:05:02.675 "bdev_name": "Malloc1" 00:05:02.675 } 00:05:02.675 ]' 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.675 /dev/nbd1' 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.675 /dev/nbd1' 00:05:02.675 
11:16:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.675 256+0 records in 00:05:02.675 256+0 records out 00:05:02.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106256 s, 98.7 MB/s 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.675 256+0 records in 00:05:02.675 256+0 records out 00:05:02.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147295 s, 71.2 MB/s 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.675 256+0 records in 00:05:02.675 256+0 records out 00:05:02.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150313 s, 69.8 MB/s 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.675 11:16:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.676 11:16:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.934 11:16:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.934 11:16:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.934 11:16:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.934 11:16:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.934 11:16:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.934 11:16:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.934 11:16:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.934 11:16:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.934 11:16:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.934 11:16:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.193 11:16:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.193 11:16:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.193 11:16:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.193 11:16:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.193 11:16:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.193 11:16:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.193 11:16:16 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:03.193 11:16:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.193 11:16:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.193 11:16:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.193 11:16:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.452 11:16:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.452 11:16:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.452 11:16:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.452 11:16:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.452 11:16:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.452 11:16:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.452 11:16:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.452 11:16:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.452 11:16:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.452 11:16:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.452 11:16:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.452 11:16:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.452 11:16:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:03.710 11:16:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:03.711 [2024-11-19 11:16:17.477567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.970 [2024-11-19 11:16:17.516144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.970 [2024-11-19 11:16:17.516144] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.970 [2024-11-19 11:16:17.557488] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:03.970 [2024-11-19 11:16:17.557527] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.254 11:16:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2074259 /var/tmp/spdk-nbd.sock 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2074259 ']' 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:07.254 11:16:20 event.app_repeat -- event/event.sh@39 -- # killprocess 2074259 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2074259 ']' 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2074259 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2074259 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2074259' 00:05:07.254 killing process with pid 2074259 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2074259 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2074259 00:05:07.254 spdk_app_start is called in Round 0. 00:05:07.254 Shutdown signal received, stop current app iteration 00:05:07.254 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 reinitialization... 00:05:07.254 spdk_app_start is called in Round 1. 00:05:07.254 Shutdown signal received, stop current app iteration 00:05:07.254 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 reinitialization... 00:05:07.254 spdk_app_start is called in Round 2. 
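Both `waitforlisten` (with its `max_retries=100`) and `waitfornbd_exit` (polling `/proc/partitions` up to 20 times before `break`) in the trace follow the same bounded-poll idiom. A generic sketch of that idiom, with names of my own choosing rather than the autotest helpers' exact code:

```shell
# Generic bounded-poll idiom, in the spirit of waitforlisten /
# waitfornbd_exit above: retry a condition up to 20 times, returning
# success as soon as it holds, and failing once the budget is spent.
wait_for() {
    local i
    for (( i = 1; i <= 20; i++ )); do
        if "$@"; then
            return 0        # condition met — mirrors the 'break' in the trace
        fi
        sleep 0.1
    done
    return 1                # gave up after 20 attempts
}

# usage sketch: wait until a marker file disappears
marker=$(mktemp)
( sleep 0.3; rm -f "$marker" ) &
wait_for test ! -e "$marker" && echo "marker gone"
wait
```

Passing the condition as `"$@"` keeps the helper reusable: the caller supplies any command (a `grep -q` against `/proc/partitions`, an RPC ping, a file test) and the retry budget stays in one place.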
00:05:07.254 Shutdown signal received, stop current app iteration 00:05:07.254 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 reinitialization... 00:05:07.254 spdk_app_start is called in Round 3. 00:05:07.254 Shutdown signal received, stop current app iteration 00:05:07.254 11:16:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:07.254 11:16:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:07.254 00:05:07.254 real 0m16.462s 00:05:07.254 user 0m36.224s 00:05:07.254 sys 0m2.571s 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.254 11:16:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.254 ************************************ 00:05:07.254 END TEST app_repeat 00:05:07.254 ************************************ 00:05:07.254 11:16:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:07.254 11:16:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:07.254 11:16:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.254 11:16:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.254 11:16:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.254 ************************************ 00:05:07.254 START TEST cpu_locks 00:05:07.254 ************************************ 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:07.254 * Looking for test storage... 
00:05:07.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.254 11:16:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:07.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.254 --rc genhtml_branch_coverage=1 00:05:07.254 --rc genhtml_function_coverage=1 00:05:07.254 --rc genhtml_legend=1 00:05:07.254 --rc geninfo_all_blocks=1 00:05:07.254 --rc geninfo_unexecuted_blocks=1 00:05:07.254 00:05:07.254 ' 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:07.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.254 --rc genhtml_branch_coverage=1 00:05:07.254 --rc genhtml_function_coverage=1 00:05:07.254 --rc genhtml_legend=1 00:05:07.254 --rc geninfo_all_blocks=1 00:05:07.254 --rc geninfo_unexecuted_blocks=1 
00:05:07.254 00:05:07.254 ' 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:07.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.254 --rc genhtml_branch_coverage=1 00:05:07.254 --rc genhtml_function_coverage=1 00:05:07.254 --rc genhtml_legend=1 00:05:07.254 --rc geninfo_all_blocks=1 00:05:07.254 --rc geninfo_unexecuted_blocks=1 00:05:07.254 00:05:07.254 ' 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:07.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.254 --rc genhtml_branch_coverage=1 00:05:07.254 --rc genhtml_function_coverage=1 00:05:07.254 --rc genhtml_legend=1 00:05:07.254 --rc geninfo_all_blocks=1 00:05:07.254 --rc geninfo_unexecuted_blocks=1 00:05:07.254 00:05:07.254 ' 00:05:07.254 11:16:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:07.254 11:16:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:07.254 11:16:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:07.254 11:16:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.254 11:16:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.254 ************************************ 00:05:07.254 START TEST default_locks 00:05:07.254 ************************************ 00:05:07.254 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:07.254 11:16:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2077263 00:05:07.254 11:16:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2077263 00:05:07.254 11:16:21 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.254 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2077263 ']' 00:05:07.254 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.254 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.254 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.254 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.254 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.514 [2024-11-19 11:16:21.060605] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
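The `cmp_versions` xtrace earlier (deciding `lt 1.15 2` for lcov) splits each version string on `.`, `-`, `:` into arrays and compares fields numerically, left to right. A simplified re-implementation of that element-wise compare — the function name and structure here are mine, not `scripts/common.sh`'s exact code:

```shell
# Simplified field-wise version comparison in the spirit of
# scripts/common.sh's cmp_versions: split on '.', '-', ':' and compare
# numerically, left to right; missing fields count as 0.
version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}
        b=${ver2[v]:-0}
        (( a < b )) && return 0   # strictly older
        (( a > b )) && return 1
    done
    return 1                      # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

Comparing fields numerically rather than lexically is the point of the exercise: a plain string compare would wrongly conclude `1.9 > 1.10`.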
00:05:07.514 [2024-11-19 11:16:21.060648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077263 ] 00:05:07.514 [2024-11-19 11:16:21.133019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.514 [2024-11-19 11:16:21.172881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.774 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.774 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:07.774 11:16:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2077263 00:05:07.774 11:16:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2077263 00:05:07.774 11:16:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.032 lslocks: write error 00:05:08.032 11:16:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2077263 00:05:08.033 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2077263 ']' 00:05:08.033 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2077263 00:05:08.033 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:08.033 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.033 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2077263 00:05:08.292 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.292 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.292 11:16:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2077263' 00:05:08.292 killing process with pid 2077263 00:05:08.292 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2077263 00:05:08.292 11:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2077263 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2077263 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2077263 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2077263 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2077263 ']' 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
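The `killprocess` xtrace above checks that the pid is alive, resolves its command name with `ps --no-headers -o comm=`, refuses to kill a `sudo` wrapper, then kills and waits. A condensed sketch of that flow (simplified — the real `autotest_common.sh` helper does more, e.g. the `uname` branch shown in the trace):

```shell
# Sketch of the killprocess pattern traced above: confirm the pid is
# alive, refuse to signal a sudo wrapper, then kill it and reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1      # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1              # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap if it is our child
}

sleep 60 &
killprocess $!
```

Reaping with `wait` matters in the test harness: it guarantees the pid is gone before the next test stage reuses sockets or lock files.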
00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2077263) - No such process 00:05:08.551 ERROR: process (pid: 2077263) is no longer running 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.551 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:08.552 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:08.552 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.552 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.552 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.552 11:16:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:08.552 11:16:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:08.552 11:16:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:08.552 11:16:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:08.552 00:05:08.552 real 0m1.126s 00:05:08.552 user 0m1.081s 00:05:08.552 sys 0m0.502s 00:05:08.552 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.552 11:16:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.552 ************************************ 00:05:08.552 END TEST default_locks 00:05:08.552 ************************************ 00:05:08.552 11:16:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:08.552 11:16:22 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.552 11:16:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.552 11:16:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.552 ************************************ 00:05:08.552 START TEST default_locks_via_rpc 00:05:08.552 ************************************ 00:05:08.552 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:08.552 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2077520 00:05:08.552 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2077520 00:05:08.552 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.552 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2077520 ']' 00:05:08.552 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.552 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.552 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.552 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.552 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.552 [2024-11-19 11:16:22.257553] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:08.552 [2024-11-19 11:16:22.257599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077520 ] 00:05:08.811 [2024-11-19 11:16:22.335381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.811 [2024-11-19 11:16:22.376370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.070 11:16:22 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2077520 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2077520 00:05:09.070 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.329 11:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2077520 00:05:09.329 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2077520 ']' 00:05:09.329 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2077520 00:05:09.329 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:09.329 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.329 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2077520 00:05:09.329 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.329 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.329 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2077520' 00:05:09.329 killing process with pid 2077520 00:05:09.329 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2077520 00:05:09.329 11:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2077520 00:05:09.589 00:05:09.589 real 0m1.061s 00:05:09.589 user 0m1.015s 00:05:09.589 sys 0m0.501s 00:05:09.589 11:16:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.589 11:16:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.589 ************************************ 00:05:09.589 END TEST default_locks_via_rpc 00:05:09.589 ************************************ 00:05:09.589 11:16:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:09.589 11:16:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.589 11:16:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.589 11:16:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.589 ************************************ 00:05:09.589 START TEST non_locking_app_on_locked_coremask 00:05:09.589 ************************************ 00:05:09.589 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:09.589 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2077775 00:05:09.589 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2077775 /var/tmp/spdk.sock 00:05:09.589 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.589 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2077775 ']' 00:05:09.589 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.589 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.589 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:09.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.589 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.589 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.862 [2024-11-19 11:16:23.387830] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:09.862 [2024-11-19 11:16:23.387866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077775 ] 00:05:09.862 [2024-11-19 11:16:23.461471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.863 [2024-11-19 11:16:23.501437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.136 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.136 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.136 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2077781 00:05:10.136 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2077781 /var/tmp/spdk2.sock 00:05:10.136 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:10.136 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2077781 ']' 00:05:10.136 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:10.136 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.136 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.136 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.136 11:16:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.136 [2024-11-19 11:16:23.773475] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:10.136 [2024-11-19 11:16:23.773518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077781 ] 00:05:10.136 [2024-11-19 11:16:23.864010] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
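The `locks_exist` checks throughout this section pipe `lslocks -p <pid>` through `grep -q spdk_cpu_lock` to confirm the target holds its per-core file lock, and `--disable-cpumask-locks` (the "CPU core locks deactivated" notice above) turns that off. As a standalone illustration of such an advisory lock, here is a minimal `flock` sketch — the file name and two-process setup are illustrative, not SPDK's actual locking code:

```shell
# Minimal advisory-lock sketch in the spirit of the spdk_cpu_lock
# checks above: one process holds an exclusive flock on a lock file,
# and a non-blocking attempt by anyone else fails while it is held.
lockfile=$(mktemp)

# holder: acquire the lock and keep it for a moment
flock -x "$lockfile" -c 'sleep 2' &
holder=$!
sleep 0.3    # give the holder time to acquire

# checker: -n makes the attempt non-blocking, so it fails immediately
if ! flock -n "$lockfile" -c true; then
    echo "lock is held"
fi
wait "$holder"
rm -f "$lockfile"
```

This is the same observable behavior the tests assert via `lslocks`: while the target process lives, its lock file cannot be acquired by another process.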
00:05:10.136 [2024-11-19 11:16:23.864040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.395 [2024-11-19 11:16:23.953786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.962 11:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.962 11:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.962 11:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2077775 00:05:10.962 11:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2077775 00:05:10.962 11:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.898 lslocks: write error 00:05:11.898 11:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2077775 00:05:11.899 11:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2077775 ']' 00:05:11.899 11:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2077775 00:05:11.899 11:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:11.899 11:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.899 11:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2077775 00:05:11.899 11:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.899 11:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.899 11:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2077775' 00:05:11.899 killing process with pid 2077775 00:05:11.899 11:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2077775 00:05:11.899 11:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2077775 00:05:12.467 11:16:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2077781 00:05:12.467 11:16:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2077781 ']' 00:05:12.467 11:16:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2077781 00:05:12.467 11:16:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:12.467 11:16:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.467 11:16:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2077781 00:05:12.726 11:16:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.726 11:16:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.726 11:16:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2077781' 00:05:12.726 killing process with pid 2077781 00:05:12.726 11:16:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2077781 00:05:12.726 11:16:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2077781 00:05:12.985 00:05:12.985 real 0m3.252s 00:05:12.985 user 0m3.430s 00:05:12.985 sys 0m1.121s 00:05:12.985 11:16:26 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.985 11:16:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.985 ************************************ 00:05:12.985 END TEST non_locking_app_on_locked_coremask 00:05:12.985 ************************************ 00:05:12.985 11:16:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:12.985 11:16:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.985 11:16:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.985 11:16:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.985 ************************************ 00:05:12.985 START TEST locking_app_on_unlocked_coremask 00:05:12.985 ************************************ 00:05:12.985 11:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:12.985 11:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2078283 00:05:12.985 11:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2078283 /var/tmp/spdk.sock 00:05:12.985 11:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:12.985 11:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2078283 ']' 00:05:12.985 11:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.986 11:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.986 11:16:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.986 11:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.986 11:16:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.986 [2024-11-19 11:16:26.709996] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:12.986 [2024-11-19 11:16:26.710040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078283 ] 00:05:13.244 [2024-11-19 11:16:26.782143] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:13.244 [2024-11-19 11:16:26.782167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.244 [2024-11-19 11:16:26.819628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.504 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.504 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:13.504 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2078381 00:05:13.504 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2078381 /var/tmp/spdk2.sock 00:05:13.504 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:13.504 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2078381 ']' 00:05:13.504 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.504 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.504 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.504 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.504 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.504 [2024-11-19 11:16:27.095994] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:13.504 [2024-11-19 11:16:27.096043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078381 ] 00:05:13.504 [2024-11-19 11:16:27.186701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.504 [2024-11-19 11:16:27.267551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.440 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.440 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:14.440 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2078381 00:05:14.440 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2078381 00:05:14.440 11:16:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.699 lslocks: write error 00:05:14.699 11:16:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2078283 00:05:14.699 11:16:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2078283 ']' 00:05:14.699 11:16:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2078283 00:05:14.699 11:16:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:14.699 11:16:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.699 11:16:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2078283 00:05:14.959 11:16:28 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.959 11:16:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.959 11:16:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2078283' 00:05:14.959 killing process with pid 2078283 00:05:14.959 11:16:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2078283 00:05:14.959 11:16:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2078283 00:05:15.527 11:16:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2078381 00:05:15.527 11:16:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2078381 ']' 00:05:15.527 11:16:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2078381 00:05:15.527 11:16:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:15.527 11:16:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.527 11:16:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2078381 00:05:15.527 11:16:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.527 11:16:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.527 11:16:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2078381' 00:05:15.527 killing process with pid 2078381 00:05:15.527 11:16:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2078381 00:05:15.527 11:16:29 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2078381 00:05:15.786 00:05:15.786 real 0m2.769s 00:05:15.786 user 0m2.918s 00:05:15.786 sys 0m0.927s 00:05:15.786 11:16:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.786 11:16:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.786 ************************************ 00:05:15.786 END TEST locking_app_on_unlocked_coremask 00:05:15.786 ************************************ 00:05:15.786 11:16:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:15.787 11:16:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.787 11:16:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.787 11:16:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.787 ************************************ 00:05:15.787 START TEST locking_app_on_locked_coremask 00:05:15.787 ************************************ 00:05:15.787 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:15.787 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.787 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2078780 00:05:15.787 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2078780 /var/tmp/spdk.sock 00:05:15.787 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2078780 ']' 00:05:15.787 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:15.787 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.787 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.787 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.787 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.787 [2024-11-19 11:16:29.537566] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:15.787 [2024-11-19 11:16:29.537606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078780 ] 00:05:16.046 [2024-11-19 11:16:29.611934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.046 [2024-11-19 11:16:29.650373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2078913 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2078913 /var/tmp/spdk2.sock 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2078913 /var/tmp/spdk2.sock 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2078913 /var/tmp/spdk2.sock 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2078913 ']' 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.305 11:16:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.305 [2024-11-19 11:16:29.928092] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:16.305 [2024-11-19 11:16:29.928143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078913 ] 00:05:16.305 [2024-11-19 11:16:30.021251] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2078780 has claimed it. 00:05:16.305 [2024-11-19 11:16:30.021295] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:16.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2078913) - No such process 00:05:16.873 ERROR: process (pid: 2078913) is no longer running 00:05:16.873 11:16:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.873 11:16:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:16.873 11:16:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:16.873 11:16:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:16.873 11:16:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:16.873 11:16:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:16.873 11:16:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2078780 00:05:16.873 11:16:30 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2078780 00:05:16.873 11:16:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.440 lslocks: write error 00:05:17.440 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2078780 00:05:17.440 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2078780 ']' 00:05:17.440 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2078780 00:05:17.440 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.440 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.440 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2078780 00:05:17.440 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.440 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.440 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2078780' 00:05:17.440 killing process with pid 2078780 00:05:17.440 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2078780 00:05:17.440 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2078780 00:05:17.700 00:05:17.700 real 0m1.932s 00:05:17.700 user 0m2.082s 00:05:17.700 sys 0m0.642s 00:05:17.700 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.700 11:16:31 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.700 ************************************ 00:05:17.700 END TEST locking_app_on_locked_coremask 00:05:17.700 ************************************ 00:05:17.700 11:16:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:17.700 11:16:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.700 11:16:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.700 11:16:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.960 ************************************ 00:05:17.960 START TEST locking_overlapped_coremask 00:05:17.960 ************************************ 00:05:17.960 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:17.960 11:16:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2079266 00:05:17.960 11:16:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2079266 /var/tmp/spdk.sock 00:05:17.960 11:16:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:17.960 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2079266 ']' 00:05:17.960 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.960 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.960 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:17.960 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.960 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.960 [2024-11-19 11:16:31.554330] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:17.960 [2024-11-19 11:16:31.554375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079266 ] 00:05:17.960 [2024-11-19 11:16:31.628799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.960 [2024-11-19 11:16:31.669334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.960 [2024-11-19 11:16:31.669443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.960 [2024-11-19 11:16:31.669444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2079271 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2079271 /var/tmp/spdk2.sock 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2079271 /var/tmp/spdk2.sock 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2079271 /var/tmp/spdk2.sock 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2079271 ']' 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.220 11:16:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.220 [2024-11-19 11:16:31.946886] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:18.220 [2024-11-19 11:16:31.946927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079271 ] 00:05:18.479 [2024-11-19 11:16:32.040860] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2079266 has claimed it. 00:05:18.479 [2024-11-19 11:16:32.040901] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:19.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2079271) - No such process 00:05:19.048 ERROR: process (pid: 2079271) is no longer running 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2079266 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2079266 ']' 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2079266 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2079266 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2079266' 00:05:19.048 killing process with pid 2079266 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2079266 00:05:19.048 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2079266 00:05:19.308 00:05:19.308 real 0m1.453s 00:05:19.308 user 0m4.004s 00:05:19.308 sys 0m0.392s 00:05:19.308 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.308 11:16:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.308 
************************************ 00:05:19.308 END TEST locking_overlapped_coremask 00:05:19.308 ************************************ 00:05:19.308 11:16:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:19.308 11:16:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.308 11:16:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.308 11:16:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.308 ************************************ 00:05:19.308 START TEST locking_overlapped_coremask_via_rpc 00:05:19.308 ************************************ 00:05:19.308 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:19.308 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2079530 00:05:19.308 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2079530 /var/tmp/spdk.sock 00:05:19.308 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:19.308 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2079530 ']' 00:05:19.308 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.308 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.308 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:19.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.308 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.308 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.308 [2024-11-19 11:16:33.072809] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:19.308 [2024-11-19 11:16:33.072852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079530 ] 00:05:19.567 [2024-11-19 11:16:33.151252] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:19.567 [2024-11-19 11:16:33.151278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:19.567 [2024-11-19 11:16:33.194106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.567 [2024-11-19 11:16:33.194210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.567 [2024-11-19 11:16:33.194210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.135 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.135 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:20.135 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2079625 00:05:20.136 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2079625 /var/tmp/spdk2.sock 00:05:20.136 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:20.136 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2079625 ']' 00:05:20.136 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.136 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.136 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.136 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.136 11:16:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.395 [2024-11-19 11:16:33.960967] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:20.395 [2024-11-19 11:16:33.961018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079625 ] 00:05:20.395 [2024-11-19 11:16:34.053896] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:20.395 [2024-11-19 11:16:34.053929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.396 [2024-11-19 11:16:34.141683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.396 [2024-11-19 11:16:34.144996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.396 [2024-11-19 11:16:34.144996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.334 11:16:34 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.334 [2024-11-19 11:16:34.823021] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2079530 has claimed it. 00:05:21.334 request: 00:05:21.334 { 00:05:21.334 "method": "framework_enable_cpumask_locks", 00:05:21.334 "req_id": 1 00:05:21.334 } 00:05:21.334 Got JSON-RPC error response 00:05:21.334 response: 00:05:21.334 { 00:05:21.334 "code": -32603, 00:05:21.334 "message": "Failed to claim CPU core: 2" 00:05:21.334 } 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2079530 /var/tmp/spdk.sock 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2079530 ']' 00:05:21.334 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.335 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.335 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.335 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.335 11:16:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.335 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.335 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:21.335 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2079625 /var/tmp/spdk2.sock 00:05:21.335 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2079625 ']' 00:05:21.335 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.335 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.335 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:21.335 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.335 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.594 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.594 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:21.594 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:21.594 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:21.594 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:21.594 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:21.594 00:05:21.594 real 0m2.216s 00:05:21.594 user 0m0.972s 00:05:21.594 sys 0m0.178s 00:05:21.594 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.594 11:16:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.594 ************************************ 00:05:21.594 END TEST locking_overlapped_coremask_via_rpc 00:05:21.594 ************************************ 00:05:21.594 11:16:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:21.594 11:16:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2079530 ]] 00:05:21.594 11:16:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2079530 00:05:21.594 11:16:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2079530 ']' 00:05:21.594 11:16:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2079530 00:05:21.594 11:16:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:21.594 11:16:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.594 11:16:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2079530 00:05:21.594 11:16:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.594 11:16:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.595 11:16:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2079530' 00:05:21.595 killing process with pid 2079530 00:05:21.595 11:16:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2079530 00:05:21.595 11:16:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2079530 00:05:22.163 11:16:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2079625 ]] 00:05:22.163 11:16:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2079625 00:05:22.163 11:16:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2079625 ']' 00:05:22.163 11:16:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2079625 00:05:22.163 11:16:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:22.164 11:16:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.164 11:16:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2079625 00:05:22.164 11:16:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:22.164 11:16:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:22.164 11:16:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2079625' 00:05:22.164 killing process with pid 2079625 00:05:22.164 11:16:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2079625 00:05:22.164 11:16:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2079625 00:05:22.423 11:16:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:22.423 11:16:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:22.423 11:16:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2079530 ]] 00:05:22.423 11:16:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2079530 00:05:22.423 11:16:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2079530 ']' 00:05:22.423 11:16:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2079530 00:05:22.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2079530) - No such process 00:05:22.423 11:16:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2079530 is not found' 00:05:22.423 Process with pid 2079530 is not found 00:05:22.423 11:16:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2079625 ]] 00:05:22.423 11:16:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2079625 00:05:22.423 11:16:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2079625 ']' 00:05:22.423 11:16:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2079625 00:05:22.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2079625) - No such process 00:05:22.423 11:16:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2079625 is not found' 00:05:22.423 Process with pid 2079625 is not found 00:05:22.423 11:16:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:22.423 00:05:22.423 real 0m15.207s 00:05:22.423 user 0m26.784s 00:05:22.423 sys 0m5.223s 00:05:22.423 11:16:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.423 
11:16:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.423 ************************************ 00:05:22.423 END TEST cpu_locks 00:05:22.423 ************************************ 00:05:22.423 00:05:22.423 real 0m39.965s 00:05:22.423 user 1m16.202s 00:05:22.423 sys 0m8.746s 00:05:22.423 11:16:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.423 11:16:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.423 ************************************ 00:05:22.423 END TEST event 00:05:22.423 ************************************ 00:05:22.423 11:16:36 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:22.423 11:16:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.423 11:16:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.423 11:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:22.423 ************************************ 00:05:22.423 START TEST thread 00:05:22.423 ************************************ 00:05:22.423 11:16:36 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:22.423 * Looking for test storage... 
00:05:22.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:22.423 11:16:36 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.423 11:16:36 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.423 11:16:36 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.683 11:16:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.683 11:16:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.683 11:16:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.683 11:16:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.683 11:16:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.683 11:16:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.683 11:16:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.683 11:16:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.683 11:16:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.683 11:16:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.683 11:16:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.683 11:16:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.683 11:16:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:22.683 11:16:36 thread -- scripts/common.sh@345 -- # : 1 00:05:22.683 11:16:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.683 11:16:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.683 11:16:36 thread -- scripts/common.sh@365 -- # decimal 1 00:05:22.683 11:16:36 thread -- scripts/common.sh@353 -- # local d=1 00:05:22.683 11:16:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.683 11:16:36 thread -- scripts/common.sh@355 -- # echo 1 00:05:22.683 11:16:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.683 11:16:36 thread -- scripts/common.sh@366 -- # decimal 2 00:05:22.683 11:16:36 thread -- scripts/common.sh@353 -- # local d=2 00:05:22.683 11:16:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.683 11:16:36 thread -- scripts/common.sh@355 -- # echo 2 00:05:22.683 11:16:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.683 11:16:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.683 11:16:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.683 11:16:36 thread -- scripts/common.sh@368 -- # return 0 00:05:22.683 11:16:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.683 11:16:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.683 --rc genhtml_branch_coverage=1 00:05:22.683 --rc genhtml_function_coverage=1 00:05:22.683 --rc genhtml_legend=1 00:05:22.683 --rc geninfo_all_blocks=1 00:05:22.683 --rc geninfo_unexecuted_blocks=1 00:05:22.683 00:05:22.683 ' 00:05:22.683 11:16:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.683 --rc genhtml_branch_coverage=1 00:05:22.683 --rc genhtml_function_coverage=1 00:05:22.683 --rc genhtml_legend=1 00:05:22.683 --rc geninfo_all_blocks=1 00:05:22.683 --rc geninfo_unexecuted_blocks=1 00:05:22.683 00:05:22.683 ' 00:05:22.683 11:16:36 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.683 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.683 --rc genhtml_branch_coverage=1 00:05:22.683 --rc genhtml_function_coverage=1 00:05:22.683 --rc genhtml_legend=1 00:05:22.683 --rc geninfo_all_blocks=1 00:05:22.683 --rc geninfo_unexecuted_blocks=1 00:05:22.683 00:05:22.683 ' 00:05:22.683 11:16:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.683 --rc genhtml_branch_coverage=1 00:05:22.683 --rc genhtml_function_coverage=1 00:05:22.683 --rc genhtml_legend=1 00:05:22.683 --rc geninfo_all_blocks=1 00:05:22.683 --rc geninfo_unexecuted_blocks=1 00:05:22.683 00:05:22.683 ' 00:05:22.683 11:16:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:22.683 11:16:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:22.683 11:16:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.683 11:16:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.683 ************************************ 00:05:22.683 START TEST thread_poller_perf 00:05:22.683 ************************************ 00:05:22.683 11:16:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:22.683 [2024-11-19 11:16:36.339313] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:22.683 [2024-11-19 11:16:36.339382] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080117 ] 00:05:22.683 [2024-11-19 11:16:36.416790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.683 [2024-11-19 11:16:36.456967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.683 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:24.063 [2024-11-19T10:16:37.844Z] ====================================== 00:05:24.063 [2024-11-19T10:16:37.844Z] busy:2310386974 (cyc) 00:05:24.063 [2024-11-19T10:16:37.844Z] total_run_count: 413000 00:05:24.063 [2024-11-19T10:16:37.844Z] tsc_hz: 2300000000 (cyc) 00:05:24.063 [2024-11-19T10:16:37.844Z] ====================================== 00:05:24.063 [2024-11-19T10:16:37.844Z] poller_cost: 5594 (cyc), 2432 (nsec) 00:05:24.063 00:05:24.063 real 0m1.184s 00:05:24.063 user 0m1.103s 00:05:24.063 sys 0m0.077s 00:05:24.063 11:16:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.063 11:16:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.063 ************************************ 00:05:24.063 END TEST thread_poller_perf 00:05:24.063 ************************************ 00:05:24.063 11:16:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:24.063 11:16:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:24.063 11:16:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.063 11:16:37 thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.063 ************************************ 00:05:24.063 START TEST thread_poller_perf 00:05:24.063 
************************************ 00:05:24.063 11:16:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:24.063 [2024-11-19 11:16:37.592911] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:24.063 [2024-11-19 11:16:37.592993] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080365 ] 00:05:24.063 [2024-11-19 11:16:37.669567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.063 [2024-11-19 11:16:37.709399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.063 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:25.001 [2024-11-19T10:16:38.782Z] ====================================== 00:05:25.001 [2024-11-19T10:16:38.782Z] busy:2301714220 (cyc) 00:05:25.001 [2024-11-19T10:16:38.782Z] total_run_count: 5371000 00:05:25.001 [2024-11-19T10:16:38.782Z] tsc_hz: 2300000000 (cyc) 00:05:25.001 [2024-11-19T10:16:38.782Z] ====================================== 00:05:25.001 [2024-11-19T10:16:38.782Z] poller_cost: 428 (cyc), 186 (nsec) 00:05:25.001 00:05:25.001 real 0m1.178s 00:05:25.001 user 0m1.094s 00:05:25.001 sys 0m0.080s 00:05:25.001 11:16:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.001 11:16:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.001 ************************************ 00:05:25.001 END TEST thread_poller_perf 00:05:25.001 ************************************ 00:05:25.261 11:16:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:25.261 00:05:25.261 real 0m2.680s 00:05:25.261 user 0m2.358s 00:05:25.261 sys 0m0.337s 00:05:25.261 11:16:38 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.261 11:16:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.261 ************************************ 00:05:25.261 END TEST thread 00:05:25.261 ************************************ 00:05:25.261 11:16:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:25.261 11:16:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:25.261 11:16:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.261 11:16:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.261 11:16:38 -- common/autotest_common.sh@10 -- # set +x 00:05:25.261 ************************************ 00:05:25.261 START TEST app_cmdline 00:05:25.261 ************************************ 00:05:25.261 11:16:38 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:25.261 * Looking for test storage... 00:05:25.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:25.261 11:16:38 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.261 11:16:38 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.261 11:16:38 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.261 11:16:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.261 11:16:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:25.261 11:16:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.261 11:16:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.261 --rc genhtml_branch_coverage=1 
00:05:25.261 --rc genhtml_function_coverage=1 00:05:25.261 --rc genhtml_legend=1 00:05:25.261 --rc geninfo_all_blocks=1 00:05:25.261 --rc geninfo_unexecuted_blocks=1 00:05:25.261 00:05:25.261 ' 00:05:25.261 11:16:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.261 --rc genhtml_branch_coverage=1 00:05:25.261 --rc genhtml_function_coverage=1 00:05:25.261 --rc genhtml_legend=1 00:05:25.261 --rc geninfo_all_blocks=1 00:05:25.261 --rc geninfo_unexecuted_blocks=1 00:05:25.261 00:05:25.261 ' 00:05:25.261 11:16:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.261 --rc genhtml_branch_coverage=1 00:05:25.261 --rc genhtml_function_coverage=1 00:05:25.261 --rc genhtml_legend=1 00:05:25.261 --rc geninfo_all_blocks=1 00:05:25.261 --rc geninfo_unexecuted_blocks=1 00:05:25.261 00:05:25.261 ' 00:05:25.261 11:16:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.261 --rc genhtml_branch_coverage=1 00:05:25.261 --rc genhtml_function_coverage=1 00:05:25.261 --rc genhtml_legend=1 00:05:25.261 --rc geninfo_all_blocks=1 00:05:25.261 --rc geninfo_unexecuted_blocks=1 00:05:25.261 00:05:25.261 ' 00:05:25.261 11:16:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:25.261 11:16:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2080660 00:05:25.261 11:16:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2080660 00:05:25.261 11:16:39 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:25.261 11:16:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2080660 ']' 00:05:25.261 11:16:39 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:25.261 11:16:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.261 11:16:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.521 11:16:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.521 11:16:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:25.521 [2024-11-19 11:16:39.088891] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:25.521 [2024-11-19 11:16:39.088937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080660 ] 00:05:25.521 [2024-11-19 11:16:39.164107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.521 [2024-11-19 11:16:39.206882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.780 11:16:39 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.780 11:16:39 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:25.781 11:16:39 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:26.040 { 00:05:26.040 "version": "SPDK v25.01-pre git sha1 dcc2ca8f3", 00:05:26.040 "fields": { 00:05:26.040 "major": 25, 00:05:26.040 "minor": 1, 00:05:26.040 "patch": 0, 00:05:26.040 "suffix": "-pre", 00:05:26.040 "commit": "dcc2ca8f3" 00:05:26.040 } 00:05:26.040 } 00:05:26.040 11:16:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:26.040 11:16:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:26.040 11:16:39 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:26.040 11:16:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:26.040 11:16:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:26.040 11:16:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.040 11:16:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.040 11:16:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:26.040 11:16:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:26.040 11:16:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:26.040 11:16:39 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:26.299 request: 00:05:26.299 { 00:05:26.299 "method": "env_dpdk_get_mem_stats", 00:05:26.299 "req_id": 1 00:05:26.299 } 00:05:26.299 Got JSON-RPC error response 00:05:26.299 response: 00:05:26.299 { 00:05:26.299 "code": -32601, 00:05:26.299 "message": "Method not found" 00:05:26.299 } 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.299 11:16:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2080660 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2080660 ']' 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2080660 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2080660 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2080660' 00:05:26.299 killing process with pid 2080660 00:05:26.299 
11:16:39 app_cmdline -- common/autotest_common.sh@973 -- # kill 2080660 00:05:26.299 11:16:39 app_cmdline -- common/autotest_common.sh@978 -- # wait 2080660 00:05:26.559 00:05:26.559 real 0m1.342s 00:05:26.559 user 0m1.565s 00:05:26.559 sys 0m0.446s 00:05:26.559 11:16:40 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.559 11:16:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:26.559 ************************************ 00:05:26.559 END TEST app_cmdline 00:05:26.559 ************************************ 00:05:26.559 11:16:40 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:26.559 11:16:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.559 11:16:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.559 11:16:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.559 ************************************ 00:05:26.559 START TEST version 00:05:26.559 ************************************ 00:05:26.559 11:16:40 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:26.818 * Looking for test storage... 
00:05:26.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:26.819 11:16:40 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.819 11:16:40 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.819 11:16:40 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.819 11:16:40 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.819 11:16:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.819 11:16:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.819 11:16:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.819 11:16:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.819 11:16:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.819 11:16:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.819 11:16:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.819 11:16:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.819 11:16:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.819 11:16:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.819 11:16:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.819 11:16:40 version -- scripts/common.sh@344 -- # case "$op" in 00:05:26.819 11:16:40 version -- scripts/common.sh@345 -- # : 1 00:05:26.819 11:16:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.819 11:16:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.819 11:16:40 version -- scripts/common.sh@365 -- # decimal 1 00:05:26.819 11:16:40 version -- scripts/common.sh@353 -- # local d=1 00:05:26.819 11:16:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.819 11:16:40 version -- scripts/common.sh@355 -- # echo 1 00:05:26.819 11:16:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.819 11:16:40 version -- scripts/common.sh@366 -- # decimal 2 00:05:26.819 11:16:40 version -- scripts/common.sh@353 -- # local d=2 00:05:26.819 11:16:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.819 11:16:40 version -- scripts/common.sh@355 -- # echo 2 00:05:26.819 11:16:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.819 11:16:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.819 11:16:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.819 11:16:40 version -- scripts/common.sh@368 -- # return 0 00:05:26.819 11:16:40 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.819 11:16:40 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.819 --rc genhtml_branch_coverage=1 00:05:26.819 --rc genhtml_function_coverage=1 00:05:26.819 --rc genhtml_legend=1 00:05:26.819 --rc geninfo_all_blocks=1 00:05:26.819 --rc geninfo_unexecuted_blocks=1 00:05:26.819 00:05:26.819 ' 00:05:26.819 11:16:40 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.819 --rc genhtml_branch_coverage=1 00:05:26.819 --rc genhtml_function_coverage=1 00:05:26.819 --rc genhtml_legend=1 00:05:26.819 --rc geninfo_all_blocks=1 00:05:26.819 --rc geninfo_unexecuted_blocks=1 00:05:26.819 00:05:26.819 ' 00:05:26.819 11:16:40 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.819 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.819 --rc genhtml_branch_coverage=1 00:05:26.819 --rc genhtml_function_coverage=1 00:05:26.819 --rc genhtml_legend=1 00:05:26.819 --rc geninfo_all_blocks=1 00:05:26.819 --rc geninfo_unexecuted_blocks=1 00:05:26.819 00:05:26.819 ' 00:05:26.819 11:16:40 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.819 --rc genhtml_branch_coverage=1 00:05:26.819 --rc genhtml_function_coverage=1 00:05:26.819 --rc genhtml_legend=1 00:05:26.819 --rc geninfo_all_blocks=1 00:05:26.819 --rc geninfo_unexecuted_blocks=1 00:05:26.819 00:05:26.819 ' 00:05:26.819 11:16:40 version -- app/version.sh@17 -- # get_header_version major 00:05:26.819 11:16:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.819 11:16:40 version -- app/version.sh@14 -- # cut -f2 00:05:26.819 11:16:40 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.819 11:16:40 version -- app/version.sh@17 -- # major=25 00:05:26.819 11:16:40 version -- app/version.sh@18 -- # get_header_version minor 00:05:26.819 11:16:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.819 11:16:40 version -- app/version.sh@14 -- # cut -f2 00:05:26.819 11:16:40 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.819 11:16:40 version -- app/version.sh@18 -- # minor=1 00:05:26.819 11:16:40 version -- app/version.sh@19 -- # get_header_version patch 00:05:26.819 11:16:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.819 11:16:40 version -- app/version.sh@14 -- # cut -f2 00:05:26.819 11:16:40 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.819 
11:16:40 version -- app/version.sh@19 -- # patch=0 00:05:26.819 11:16:40 version -- app/version.sh@20 -- # get_header_version suffix 00:05:26.819 11:16:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.819 11:16:40 version -- app/version.sh@14 -- # cut -f2 00:05:26.819 11:16:40 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.819 11:16:40 version -- app/version.sh@20 -- # suffix=-pre 00:05:26.819 11:16:40 version -- app/version.sh@22 -- # version=25.1 00:05:26.819 11:16:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:26.819 11:16:40 version -- app/version.sh@28 -- # version=25.1rc0 00:05:26.819 11:16:40 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:26.819 11:16:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:26.819 11:16:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:26.819 11:16:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:26.819 00:05:26.819 real 0m0.246s 00:05:26.819 user 0m0.148s 00:05:26.819 sys 0m0.138s 00:05:26.819 11:16:40 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.819 11:16:40 version -- common/autotest_common.sh@10 -- # set +x 00:05:26.819 ************************************ 00:05:26.819 END TEST version 00:05:26.819 ************************************ 00:05:26.819 11:16:40 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:26.819 11:16:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:26.819 11:16:40 -- spdk/autotest.sh@194 -- # uname -s 00:05:26.819 11:16:40 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:26.819 11:16:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:26.819 11:16:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:26.819 11:16:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:26.819 11:16:40 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:26.819 11:16:40 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:26.819 11:16:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.819 11:16:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.819 11:16:40 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:26.819 11:16:40 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:26.819 11:16:40 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:26.820 11:16:40 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:26.820 11:16:40 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:26.820 11:16:40 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:26.820 11:16:40 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:26.820 11:16:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:26.820 11:16:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.820 11:16:40 -- common/autotest_common.sh@10 -- # set +x 00:05:27.079 ************************************ 00:05:27.079 START TEST nvmf_tcp 00:05:27.079 ************************************ 00:05:27.079 11:16:40 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:27.079 * Looking for test storage... 
00:05:27.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:27.079 11:16:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:27.079 11:16:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:27.079 11:16:40 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:27.079 11:16:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:27.079 11:16:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.080 11:16:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.080 ************************************ 00:05:27.080 START TEST nvmf_target_core 00:05:27.080 ************************************ 00:05:27.080 11:16:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:27.340 * Looking for test storage... 
00:05:27.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:27.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:27.341 ************************************ 00:05:27.341 START TEST nvmf_abort 00:05:27.341 ************************************ 00:05:27.341 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:27.602 * Looking for test storage... 
00:05:27.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.603 11:16:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:27.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:27.603 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:34.317 11:16:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:34.317 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:34.317 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:34.317 11:16:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.317 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:34.318 Found net devices under 0000:86:00.0: cvl_0_0 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:05:34.318 Found net devices under 0000:86:00.1: cvl_0_1 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:34.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:34.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:05:34.318 00:05:34.318 --- 10.0.0.2 ping statistics --- 00:05:34.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.318 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:34.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:34.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:05:34.318 00:05:34.318 --- 10.0.0.1 ping statistics --- 00:05:34.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.318 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2084339 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2084339 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2084339 ']' 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.318 11:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.318 [2024-11-19 11:16:47.424455] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:34.318 [2024-11-19 11:16:47.424507] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:34.318 [2024-11-19 11:16:47.504069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.318 [2024-11-19 11:16:47.550918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:34.318 [2024-11-19 11:16:47.550953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:34.318 [2024-11-19 11:16:47.550960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.318 [2024-11-19 11:16:47.550967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.318 [2024-11-19 11:16:47.550972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:34.318 [2024-11-19 11:16:47.552207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.318 [2024-11-19 11:16:47.552313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.318 [2024-11-19 11:16:47.552314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.600 [2024-11-19 11:16:48.311005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.600 Malloc0 00:05:34.600 11:16:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:34.600 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.601 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.601 Delay0 00:05:34.601 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.601 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:34.601 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.601 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.872 [2024-11-19 11:16:48.393177] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.872 11:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:34.872 [2024-11-19 11:16:48.529693] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:36.806 Initializing NVMe Controllers 00:05:36.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:36.806 controller IO queue size 128 less than required 00:05:36.806 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:36.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:36.806 Initialization complete. Launching workers. 
00:05:36.806 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36480 00:05:36.806 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36545, failed to submit 62 00:05:36.806 success 36484, unsuccessful 61, failed 0 00:05:36.806 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:36.806 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.806 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:36.806 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.806 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:36.806 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:36.806 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:36.806 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:36.806 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:36.806 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:36.806 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:36.806 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:37.066 rmmod nvme_tcp 00:05:37.066 rmmod nvme_fabrics 00:05:37.066 rmmod nvme_keyring 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:37.066 11:16:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2084339 ']' 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2084339 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2084339 ']' 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2084339 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2084339 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2084339' 00:05:37.066 killing process with pid 2084339 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2084339 00:05:37.066 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2084339 00:05:37.325 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:37.325 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:37.325 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:37.325 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:37.325 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:37.325 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:37.325 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:37.325 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:37.325 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:37.325 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:37.325 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:37.325 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:39.231 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:39.231 00:05:39.231 real 0m11.873s 00:05:39.231 user 0m13.587s 00:05:39.231 sys 0m5.484s 00:05:39.231 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.231 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.231 ************************************ 00:05:39.231 END TEST nvmf_abort 00:05:39.231 ************************************ 00:05:39.231 11:16:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:39.231 11:16:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:39.231 11:16:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.231 11:16:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:39.491 ************************************ 00:05:39.491 START TEST nvmf_ns_hotplug_stress 00:05:39.491 ************************************ 00:05:39.491 11:16:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:39.491 * Looking for test storage... 00:05:39.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.491 
11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.491 11:16:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.491 --rc genhtml_branch_coverage=1 00:05:39.491 --rc genhtml_function_coverage=1 00:05:39.491 --rc genhtml_legend=1 00:05:39.491 --rc geninfo_all_blocks=1 00:05:39.491 --rc geninfo_unexecuted_blocks=1 00:05:39.491 00:05:39.491 ' 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.491 --rc genhtml_branch_coverage=1 00:05:39.491 --rc genhtml_function_coverage=1 00:05:39.491 --rc genhtml_legend=1 00:05:39.491 --rc geninfo_all_blocks=1 00:05:39.491 --rc geninfo_unexecuted_blocks=1 00:05:39.491 00:05:39.491 ' 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.491 --rc genhtml_branch_coverage=1 00:05:39.491 --rc genhtml_function_coverage=1 00:05:39.491 --rc genhtml_legend=1 00:05:39.491 --rc geninfo_all_blocks=1 00:05:39.491 --rc geninfo_unexecuted_blocks=1 00:05:39.491 00:05:39.491 ' 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.491 --rc genhtml_branch_coverage=1 00:05:39.491 --rc genhtml_function_coverage=1 00:05:39.491 --rc genhtml_legend=1 00:05:39.491 --rc geninfo_all_blocks=1 00:05:39.491 --rc geninfo_unexecuted_blocks=1 00:05:39.491 
00:05:39.491 ' 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.491 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:39.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:39.492 11:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:46.067 11:16:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:46.067 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:46.067 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:46.067 11:16:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:46.067 Found net devices under 0000:86:00.0: cvl_0_0 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:46.067 11:16:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:46.067 Found net devices under 0000:86:00.1: cvl_0_1 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:46.067 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:46.068 11:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:46.068 11:16:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:46.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:46.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:05:46.068 00:05:46.068 --- 10.0.0.2 ping statistics --- 00:05:46.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.068 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:46.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:46.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:05:46.068 00:05:46.068 --- 10.0.0.1 ping statistics --- 00:05:46.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.068 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2088578 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2088578 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2088578 ']' 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:46.068 [2024-11-19 11:16:59.338924] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:46.068 [2024-11-19 11:16:59.338975] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:46.068 [2024-11-19 11:16:59.419605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.068 [2024-11-19 11:16:59.459507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:46.068 [2024-11-19 11:16:59.459544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:46.068 [2024-11-19 11:16:59.459552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.068 [2024-11-19 11:16:59.459559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:46.068 [2024-11-19 11:16:59.459564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
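The `nvmf_tcp_init` sequence traced above (nvmf/common.sh@250-291) moves one NIC port into a private network namespace so target and initiator can talk over real hardware on a single host, then opens TCP port 4420 through the firewall and verifies connectivity with `ping`. A dry-run sketch of that sequence, assuming the `cvl_0_0`/`cvl_0_1` interface names from this run; `run()` only prints each command, since executing them for real requires root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init from nvmf/common.sh; run() prints instead of executing.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0   INITIATOR_IF=cvl_0_1
TARGET_IP=10.0.0.2  INITIATOR_IP=10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"                     # clear stale addresses
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"                                # private namespace for the target
run ip link set "$TARGET_IF" netns "$NS"              # move the target port into it
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up             # loopback inside the namespace
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
```

After this, `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix seen in the trace), so the target only sees the namespaced port.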
00:05:46.068 [2024-11-19 11:16:59.460892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.068 [2024-11-19 11:16:59.460995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.068 [2024-11-19 11:16:59.460995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:46.068 [2024-11-19 11:16:59.769620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.068 11:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:46.328 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:46.587 [2024-11-19 11:17:00.179126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:46.587 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:46.846 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:46.846 Malloc0 00:05:46.846 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:47.105 Delay0 00:05:47.105 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.364 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:47.623 NULL1 00:05:47.623 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:47.882 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2088884 00:05:47.882 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:47.882 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:47.882 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.882 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.141 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:48.141 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:48.400 true 00:05:48.400 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:48.400 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.659 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.918 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:48.918 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:48.918 true 00:05:49.177 11:17:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:49.177 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.177 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.436 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:49.436 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:49.694 true 00:05:49.694 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:49.694 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.953 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.212 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:50.212 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:50.212 true 00:05:50.212 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:50.212 11:17:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.471 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.730 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:50.730 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:50.988 true 00:05:50.988 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:50.988 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.247 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.505 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:51.505 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:51.505 true 00:05:51.764 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:51.764 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.764 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.024 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:52.024 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:52.283 true 00:05:52.283 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:52.283 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.543 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.802 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:52.802 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:52.802 true 00:05:53.062 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:53.062 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.062 
11:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.321 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:53.321 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:53.580 true 00:05:53.580 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:53.580 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.839 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.097 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:54.097 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:54.097 true 00:05:54.356 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:54.356 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.356 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.615 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:54.615 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:54.874 true 00:05:54.874 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:54.874 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.132 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.391 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:55.391 11:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:55.391 true 00:05:55.391 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:55.391 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.650 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.907 
11:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:55.908 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:56.166 true 00:05:56.166 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:56.166 11:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.425 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.683 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:56.683 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:56.683 true 00:05:56.683 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:56.683 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.941 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.200 11:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:57.200 11:17:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:57.459 true 00:05:57.459 11:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:57.459 11:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.717 11:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.976 11:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:57.976 11:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:57.976 true 00:05:57.976 11:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:57.976 11:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.235 11:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.494 11:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:58.494 11:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:58.753 true 00:05:58.753 11:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:58.753 11:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.012 11:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.012 11:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:59.012 11:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:59.270 true 00:05:59.270 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:05:59.270 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.529 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.788 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:59.788 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:00.047 true 00:06:00.047 11:17:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:00.047 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.305 11:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.305 11:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:00.305 11:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:00.564 true 00:06:00.564 11:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:00.564 11:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.823 11:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.082 11:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:01.082 11:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:01.340 true 00:06:01.340 11:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:01.340 11:17:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.599 11:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.599 11:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:01.599 11:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:01.857 true 00:06:01.857 11:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:01.858 11:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.116 11:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.375 11:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:02.375 11:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:02.633 true 00:06:02.634 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:02.634 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.892 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.892 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:02.892 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:03.151 true 00:06:03.151 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:03.151 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.410 11:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.670 11:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:03.670 11:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:03.929 true 00:06:03.929 11:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:03.929 11:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.929 
11:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.188 11:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:04.188 11:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:04.447 true 00:06:04.447 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:04.447 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.735 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.010 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:05.010 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:05.010 true 00:06:05.010 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:05.010 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.289 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.548 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:05.548 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:05.807 true 00:06:05.807 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:05.807 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.066 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.066 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:06.066 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:06.324 true 00:06:06.324 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:06.324 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.582 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.841 
11:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:06.841 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:07.100 true 00:06:07.100 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:07.100 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.100 11:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.359 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:07.359 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:07.618 true 00:06:07.618 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:07.618 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.876 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.136 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:08.136 11:17:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:08.136 true 00:06:08.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:08.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.395 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.654 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:08.654 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:08.913 true 00:06:08.913 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:08.913 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.173 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.432 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:09.432 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:09.432 true 00:06:09.691 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:09.691 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.691 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.949 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:09.949 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:10.209 true 00:06:10.209 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:10.209 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.468 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.726 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:10.726 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:10.726 true 00:06:10.985 11:17:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:10.985 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.985 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.244 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:11.244 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:11.503 true 00:06:11.503 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:11.503 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.762 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.022 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:12.022 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:12.022 true 00:06:12.022 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:12.022 11:17:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.281 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.539 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:12.539 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:12.798 true 00:06:12.798 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:12.798 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.057 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.316 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:13.316 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:13.316 true 00:06:13.316 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:13.316 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.575 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.834 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:13.834 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:14.093 true 00:06:14.093 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:14.093 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.352 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.352 11:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:14.352 11:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:14.610 true 00:06:14.610 11:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:14.610 11:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.869 
11:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.128 11:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:15.128 11:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:15.387 true 00:06:15.387 11:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:15.387 11:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.648 11:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.648 11:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:15.648 11:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:15.906 true 00:06:15.906 11:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:15.906 11:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.165 11:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.424 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:16.424 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:16.682 true 00:06:16.682 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:16.682 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.940 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.940 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:16.940 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:17.198 true 00:06:17.198 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884 00:06:17.198 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.457 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.716 
11:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:06:17.716 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:06:17.716 true
00:06:17.975 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884
00:06:17.975 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.975 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:18.234 Initializing NVMe Controllers
00:06:18.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:18.234 Controller IO queue size 128, less than required.
00:06:18.234 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:18.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:18.234 Initialization complete. Launching workers.
00:06:18.234 ========================================================
00:06:18.234                                            Latency(us)
00:06:18.234 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:06:18.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   26778.40      13.08    4779.74    1585.11   43983.16
00:06:18.234 ========================================================
00:06:18.234 Total                                                  :   26778.40      13.08    4779.74    1585.11   43983.16
00:06:18.234
00:06:18.234 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:06:18.234 11:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:06:18.492 true
00:06:18.492 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2088884
00:06:18.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2088884) - No such process
00:06:18.492 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2088884
00:06:18.492 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:18.751 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:18.751 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:18.751 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:18.751 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:18.751
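The hot-plug cycle traced above (ns_hotplug_stress.sh @44-@50) can be sketched as plain bash. This is a minimal sketch, not the real harness: `rpc` is a hypothetical echo stub standing in for scripts/rpc.py, the three-iteration count is illustrative (the real script loops while `kill -0 $perf_pid` reports the perf process alive, which is why the log ends with "No such process"), and no SPDK target is contacted:

```shell
#!/usr/bin/env bash
# Sketch of one ns_hotplug_stress iteration, reconstructed from the log above.
# `rpc` is a stub for /var/jenkins/workspace/.../spdk/scripts/rpc.py (assumption).
rpc() { echo "rpc $*"; }

null_size=1021            # the log shows this counter passing 1022..1048
for _ in 1 2 3; do        # real loop condition: kill -0 $perf_pid  (sh@44)
  rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46
  null_size=$((null_size + 1))                                 # sh@49
  rpc bdev_null_resize NULL1 "$null_size"                      # sh@50
done
echo "final null_size=$null_size"
```

Each pass removes namespace 1, re-adds the Delay0 bdev as a namespace, and grows the NULL1 bdev by one unit, so the target sees continuous hot-plug churn while I/O is in flight.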
11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:18.751 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:19.010 null0 00:06:19.010 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:19.010 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:19.010 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:19.269 null1 00:06:19.269 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:19.269 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:19.269 11:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:19.528 null2 00:06:19.528 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:19.528 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:19.528 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:19.528 null3 00:06:19.787 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:19.787 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( 
i < nthreads )) 00:06:19.787 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:19.787 null4 00:06:19.788 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:19.788 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:19.788 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:20.047 null5 00:06:20.047 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:20.047 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:20.047 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:20.306 null6 00:06:20.306 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:20.306 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:20.306 11:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:20.566 null7 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:20.566 
11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2094551 2094553 2094554 2094556 2094558 2094560 2094561 2094563 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.566 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.826 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.086 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.086 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.086 11:17:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.086 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.086 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.086 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.086 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.086 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.346 11:17:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.346 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.347 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.347 11:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.607 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.866 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.867 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.867 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.867 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:06:21.867 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.867 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.867 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.126 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.385 11:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.385 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.385 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.385 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.385 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.385 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.385 11:17:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.385 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.644 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.903 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.903 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.903 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.904 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.163 11:17:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.163 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.163 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.163 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.163 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.163 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.163 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.163 11:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.423 11:17:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.423 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.683 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.683 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.683 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.683 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.683 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.683 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.683 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.683 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.683 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.683 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.683 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:06:23.941 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.941 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.941 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.941 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.941 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.941 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.941 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.942 11:17:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.942 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.201 
11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.201 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.202 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:24.202 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.202 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.202 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:24.461 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:24.461 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.461 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:24.461 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:24.461 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:24.461 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:24.461 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:24.461 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:24.720 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.720 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.721 11:17:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:24.721 rmmod nvme_tcp 00:06:24.721 rmmod nvme_fabrics 00:06:24.721 rmmod nvme_keyring 00:06:24.721 11:17:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2088578 ']' 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2088578 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2088578 ']' 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2088578 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2088578 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2088578' 00:06:24.721 killing process with pid 2088578 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2088578 00:06:24.721 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2088578 00:06:24.981 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:06:24.981 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:24.981 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:24.981 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:24.981 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:24.981 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:24.981 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:24.981 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:24.981 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:24.981 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.981 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.981 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:27.520 00:06:27.520 real 0m47.679s 00:06:27.520 user 3m22.464s 00:06:27.520 sys 0m17.324s 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:27.520 ************************************ 00:06:27.520 END TEST nvmf_ns_hotplug_stress 00:06:27.520 ************************************ 00:06:27.520 11:17:40 
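For reference, the `@16`/`@17`/`@18` churn traced above boils down to a small hot-add/hot-remove loop. The sketch below is one plausible shape reconstructed from the trace, not the actual `ns_hotplug_stress.sh` source; `rpc` is a stub standing in for `scripts/rpc.py` so the sketch runs standalone, and the per-namespace workers are an assumption that explains why the trace interleaves loop-guard, add, and remove lines from different NSIDs.

```shell
#!/usr/bin/env bash
# Reconstruction (an assumption, not the real script) of the namespace
# hotplug stress loop seen in the trace: @16 is the loop guard, @17 the
# hot-add, @18 the hot-remove. "rpc" stubs out scripts/rpc.py.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

# One worker per namespace; eight run concurrently, which is why the
# trace interleaves entries from different NSIDs.
worker() {
    local nsid=$1 bdev=$2 i
    for (( i = 0; i < 10; ++i )); do                          # @16
        rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"   # @17
        rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"           # @18
    done
}

# Capture all worker output so it can be inspected after the run.
out=$(
    for n in {1..8}; do
        worker "$n" "null$((n - 1))" &
    done
    wait
)
```

Each of the 8 workers performs 10 add/remove cycles, so a full run issues 80 `nvmf_subsystem_add_ns` and 80 `nvmf_subsystem_remove_ns` calls against `cnode1`, exercising namespace attach/detach while host I/O is in flight.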
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:27.520 ************************************ 00:06:27.520 START TEST nvmf_delete_subsystem 00:06:27.520 ************************************ 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:27.520 * Looking for test storage... 00:06:27.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.520 11:17:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:27.520 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.521 --rc genhtml_branch_coverage=1 00:06:27.521 --rc genhtml_function_coverage=1 00:06:27.521 --rc genhtml_legend=1 
00:06:27.521 --rc geninfo_all_blocks=1 00:06:27.521 --rc geninfo_unexecuted_blocks=1 00:06:27.521 00:06:27.521 ' 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.521 --rc genhtml_branch_coverage=1 00:06:27.521 --rc genhtml_function_coverage=1 00:06:27.521 --rc genhtml_legend=1 00:06:27.521 --rc geninfo_all_blocks=1 00:06:27.521 --rc geninfo_unexecuted_blocks=1 00:06:27.521 00:06:27.521 ' 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.521 --rc genhtml_branch_coverage=1 00:06:27.521 --rc genhtml_function_coverage=1 00:06:27.521 --rc genhtml_legend=1 00:06:27.521 --rc geninfo_all_blocks=1 00:06:27.521 --rc geninfo_unexecuted_blocks=1 00:06:27.521 00:06:27.521 ' 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.521 --rc genhtml_branch_coverage=1 00:06:27.521 --rc genhtml_function_coverage=1 00:06:27.521 --rc genhtml_legend=1 00:06:27.521 --rc geninfo_all_blocks=1 00:06:27.521 --rc geninfo_unexecuted_blocks=1 00:06:27.521 00:06:27.521 ' 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:27.521 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:27.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:27.521 11:17:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:27.521 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:34.095 11:17:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:34.095 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:34.095 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:34.095 Found net devices under 0000:86:00.0: cvl_0_0 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:34.095 11:17:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:34.095 Found net devices under 0000:86:00.1: cvl_0_1 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:34.095 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:34.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:34.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:06:34.096 00:06:34.096 --- 10.0.0.2 ping statistics --- 00:06:34.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.096 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:34.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:34.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:06:34.096 00:06:34.096 --- 10.0.0.1 ping statistics --- 00:06:34.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.096 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:34.096 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2098940 00:06:34.096 11:17:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2098940 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2098940 ']' 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.096 [2024-11-19 11:17:47.061957] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:34.096 [2024-11-19 11:17:47.062022] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.096 [2024-11-19 11:17:47.140132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.096 [2024-11-19 11:17:47.182277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:34.096 [2024-11-19 11:17:47.182313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.096 [2024-11-19 11:17:47.182321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.096 [2024-11-19 11:17:47.182327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.096 [2024-11-19 11:17:47.182332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:34.096 [2024-11-19 11:17:47.183496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.096 [2024-11-19 11:17:47.183496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.096 [2024-11-19 11:17:47.332063] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.096 [2024-11-19 11:17:47.352277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.096 NULL1 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.096 11:17:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.096 Delay0 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2099015 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:34.096 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:34.096 [2024-11-19 11:17:47.463237] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
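The trace above shows delete_subsystem.sh driving the target through a fixed RPC sequence (transport, subsystem, listener, null bdev, delay bdev, namespace) before launching spdk_nvme_perf. A minimal dry-run sketch of that sequence, reconstructed from the log: the `rpc` echo wrapper is an illustration-only stand-in; on a real target you would call SPDK's `rpc.py` against a running nvmf_tgt instead of echoing.

```shell
#!/bin/sh
# Dry-run sketch of the RPC sequence traced above (delete_subsystem.sh steps @15-@24).
# The rpc() wrapper only prints the command; replace the echo with a real
# scripts/rpc.py invocation when a nvmf_tgt is running.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev wrapping NULL1 is what keeps I/O in flight long enough for the subsequent nvmf_delete_subsystem call to race against active commands, which is the point of this test.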
00:06:36.001 11:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:36.001 11:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.001 11:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.001 Read completed with error (sct=0, sc=8) 00:06:36.001 Read completed with error (sct=0, sc=8) 00:06:36.001 Read completed with error (sct=0, sc=8) 00:06:36.001 starting I/O failed: -6 00:06:36.001 Read completed with error (sct=0, sc=8) 00:06:36.001 Write completed with error (sct=0, sc=8) 00:06:36.001 Write completed with error (sct=0, sc=8) 00:06:36.001 Read completed with error (sct=0, sc=8) 00:06:36.001 starting I/O failed: -6 00:06:36.001 Read completed with error (sct=0, sc=8) 00:06:36.001 Read completed with error (sct=0, sc=8) 00:06:36.001 Read completed with error (sct=0, sc=8) 00:06:36.001 Read completed with error (sct=0, sc=8) 00:06:36.001 starting I/O failed: -6 00:06:36.001 Write completed with error (sct=0, sc=8) 00:06:36.001 Read completed with error (sct=0, sc=8) 00:06:36.001 Write completed with error (sct=0, sc=8) 00:06:36.001 Write completed with error (sct=0, sc=8) 00:06:36.001 starting I/O failed: -6 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error 
(sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 [2024-11-19 11:17:49.578216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e2860 is same with the state(6) to be set 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with 
error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 
00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with 
error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 starting I/O failed: -6 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 [2024-11-19 11:17:49.582934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc89400d350 is same with the state(6) to be set 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Write 
completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Read completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.002 Write completed with error (sct=0, sc=8) 00:06:36.003 Read completed with error (sct=0, sc=8) 00:06:36.003 Read completed with error (sct=0, sc=8) 00:06:36.003 Write completed with error (sct=0, sc=8) 00:06:36.003 Write completed with error (sct=0, sc=8) 00:06:36.003 Read completed with error (sct=0, sc=8) 00:06:36.003 Write completed with error (sct=0, sc=8) 00:06:36.003 Read completed with error (sct=0, sc=8) 00:06:36.003 Write completed with error (sct=0, sc=8) 00:06:36.003 Write completed with error (sct=0, sc=8) 00:06:36.003 Read completed with error (sct=0, sc=8) 00:06:36.003 Read completed with error (sct=0, sc=8) 00:06:36.003 Read completed with error (sct=0, sc=8) 00:06:36.940 [2024-11-19 11:17:50.557666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e39a0 is same with the state(6) to be set 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with 
error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 [2024-11-19 11:17:50.581355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e22c0 is same with the state(6) to be set 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, 
sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 [2024-11-19 11:17:50.581696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e2680 is same with the state(6) to be set 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 [2024-11-19 11:17:50.585529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc89400d020 is same with the state(6) to be set 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Read completed with error (sct=0, sc=8) 00:06:36.940 Write completed with error (sct=0, sc=8) 
00:06:36.940 Write completed with error (sct=0, sc=8)
00:06:36.940 Write completed with error (sct=0, sc=8)
00:06:36.940 Read completed with error (sct=0, sc=8)
00:06:36.940 [2024-11-19 11:17:50.586228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc89400d680 is same with the state(6) to be set
00:06:36.940 Initializing NVMe Controllers
00:06:36.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:36.940 Controller IO queue size 128, less than required.
00:06:36.940 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:36.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:36.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:36.940 Initialization complete. Launching workers.
00:06:36.940 ========================================================
00:06:36.940 Latency(us)
00:06:36.940 Device Information : IOPS MiB/s Average min max
00:06:36.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.81 0.08 903740.25 290.59 1006046.31
00:06:36.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.85 0.08 927818.83 228.93 1009625.25
00:06:36.940 ========================================================
00:06:36.940 Total : 321.66 0.16 915406.81 228.93 1009625.25
00:06:36.940
00:06:36.940 [2024-11-19 11:17:50.586907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e39a0 (9): Bad file descriptor
00:06:36.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:36.940 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.940 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:36.940 11:17:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2099015 00:06:36.941 11:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2099015 00:06:37.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2099015) - No such process 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2099015 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2099015 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2099015 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.511 [2024-11-19 11:17:51.116498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@54 -- # perf_pid=2099660 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2099660 00:06:37.511 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.511 [2024-11-19 11:17:51.204580] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
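The repeated `kill -0 2099660` / `sleep 0.5` lines that follow are delete_subsystem.sh's bounded wait loop: it polls the spdk_nvme_perf PID until the process exits or about 20 iterations elapse, then `wait`s to reap it. A minimal self-contained sketch of that pattern, with a stand-in `sleep` child and a shortened poll interval (both illustration-only assumptions):

```shell
#!/bin/sh
# Sketch of the poll loop visible in the trace (delete_subsystem.sh @56-@60):
# kill -0 probes whether the PID is still alive without sending a signal.
sleep 0.3 &            # stand-in for the spdk_nvme_perf child
perf_pid=$!

delay=0
while [ "$delay" -le 20 ] && kill -0 "$perf_pid" 2>/dev/null; do
    delay=$((delay + 1))
    sleep 0.1          # the real script sleeps 0.5 per iteration
done
wait "$perf_pid" 2>/dev/null   # reap the child once it is gone
echo "perf child exited after $delay polls"
```

This also explains the "kill: (2099660) - No such process" line later in the log: once perf exits on its own, the next `kill -0` probe fails, which is the loop's normal exit condition rather than an error.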
00:06:38.079 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:38.079 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2099660 00:06:38.079 11:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:38.646 11:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:38.646 11:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2099660 00:06:38.646 11:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:38.904 11:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:38.904 11:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2099660 00:06:38.904 11:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:39.473 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:39.473 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2099660 00:06:39.473 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:40.041 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:40.041 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2099660 00:06:40.041 11:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:40.608 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:40.608 11:17:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2099660
00:06:40.608 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:40.608 Initializing NVMe Controllers
00:06:40.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:40.609 Controller IO queue size 128, less than required.
00:06:40.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:40.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:40.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:40.609 Initialization complete. Launching workers.
00:06:40.609 ========================================================
00:06:40.609 Latency(us)
00:06:40.609 Device Information : IOPS MiB/s Average min max
00:06:40.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002058.92 1000134.70 1005648.19
00:06:40.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004052.88 1000182.33 1010277.58
00:06:40.609 ========================================================
00:06:40.609 Total : 256.00 0.12 1003055.90 1000134.70 1010277.58
00:06:40.609
00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2099660
00:06:41.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2099660) - No such process
00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2099660
00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - 
SIGINT SIGTERM EXIT 00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:41.177 rmmod nvme_tcp 00:06:41.177 rmmod nvme_fabrics 00:06:41.177 rmmod nvme_keyring 00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2098940 ']' 00:06:41.177 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2098940 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2098940 ']' 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2098940 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.178 11:17:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2098940 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2098940' 00:06:41.178 killing process with pid 2098940 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2098940 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2098940 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.178 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:43.716 00:06:43.716 real 0m16.212s 00:06:43.716 user 0m29.215s 00:06:43.716 sys 0m5.530s 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:43.716 ************************************ 00:06:43.716 END TEST nvmf_delete_subsystem 00:06:43.716 ************************************ 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:43.716 ************************************ 00:06:43.716 START TEST nvmf_host_management 00:06:43.716 ************************************ 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:43.716 * Looking for test storage... 
00:06:43.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:43.716 11:17:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.716 11:17:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.716 --rc genhtml_branch_coverage=1 00:06:43.716 --rc genhtml_function_coverage=1 00:06:43.716 --rc genhtml_legend=1 00:06:43.716 --rc geninfo_all_blocks=1 00:06:43.716 --rc geninfo_unexecuted_blocks=1 00:06:43.716 00:06:43.716 ' 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.716 --rc genhtml_branch_coverage=1 00:06:43.716 --rc genhtml_function_coverage=1 00:06:43.716 --rc genhtml_legend=1 00:06:43.716 --rc geninfo_all_blocks=1 00:06:43.716 --rc geninfo_unexecuted_blocks=1 00:06:43.716 00:06:43.716 ' 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.716 --rc genhtml_branch_coverage=1 00:06:43.716 --rc genhtml_function_coverage=1 00:06:43.716 --rc genhtml_legend=1 00:06:43.716 --rc geninfo_all_blocks=1 00:06:43.716 --rc geninfo_unexecuted_blocks=1 00:06:43.716 00:06:43.716 ' 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.716 --rc genhtml_branch_coverage=1 00:06:43.716 --rc genhtml_function_coverage=1 00:06:43.716 --rc genhtml_legend=1 00:06:43.716 --rc geninfo_all_blocks=1 00:06:43.716 --rc geninfo_unexecuted_blocks=1 00:06:43.716 00:06:43.716 ' 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:43.716 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:43.717 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:50.288 11:18:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.288 11:18:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.288 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:50.289 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:50.289 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:50.289 11:18:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:50.289 Found net devices under 0000:86:00.0: cvl_0_0 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:50.289 Found net devices under 0000:86:00.1: cvl_0_1 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:50.289 11:18:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:50.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:06:50.289 00:06:50.289 --- 10.0.0.2 ping statistics --- 00:06:50.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.289 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:06:50.289 00:06:50.289 --- 10.0.0.1 ping statistics --- 00:06:50.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.289 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
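Editor's note for readers reproducing this environment: the block above shows the suite isolating the target-side port in a network namespace and verifying connectivity with `ping` before loading `nvme-tcp`. A minimal sketch of that wiring, with interface names (`cvl_0_0`, `cvl_0_1`), addresses, and the tagged iptables rule copied from the log, is below. It requires root, and it is a hand-written approximation, not the suite's actual `nvmf_tcp_init` from `nvmf/common.sh`.

```shell
# Hedged sketch (requires root) of the namespace wiring traced above: the
# target-side port is moved into its own namespace so initiator and target
# stacks are isolated on one host. Names/addresses are taken from the log.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"        # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic in, tagged with a comment so the later cleanup
# (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip it.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                     # initiator -> target sanity check
```

The comment tag on the iptables rule is what makes the `iptr` cleanup step seen earlier in the log (filtering `SPDK_NVMF` out of `iptables-save` output) safe to run without disturbing unrelated rules.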
00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2104015 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2104015 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2104015 ']' 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.289 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.289 [2024-11-19 11:18:03.360026] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:50.289 [2024-11-19 11:18:03.360068] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.289 [2024-11-19 11:18:03.438218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.289 [2024-11-19 11:18:03.483696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.289 [2024-11-19 11:18:03.483731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.289 [2024-11-19 11:18:03.483738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.290 [2024-11-19 11:18:03.483745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.290 [2024-11-19 11:18:03.483750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:50.290 [2024-11-19 11:18:03.485157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.290 [2024-11-19 11:18:03.485265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.290 [2024-11-19 11:18:03.485281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:50.290 [2024-11-19 11:18:03.485283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.549 [2024-11-19 11:18:04.240250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:50.549 11:18:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.549 Malloc0 00:06:50.549 [2024-11-19 11:18:04.310457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.549 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2104238 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2104238 /var/tmp/bdevperf.sock 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2104238 ']' 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:50.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:50.809 { 00:06:50.809 "params": { 00:06:50.809 "name": "Nvme$subsystem", 00:06:50.809 "trtype": "$TEST_TRANSPORT", 00:06:50.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:50.809 "adrfam": "ipv4", 00:06:50.809 "trsvcid": "$NVMF_PORT", 00:06:50.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:50.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:50.809 "hdgst": ${hdgst:-false}, 
00:06:50.809 "ddgst": ${ddgst:-false} 00:06:50.809 }, 00:06:50.809 "method": "bdev_nvme_attach_controller" 00:06:50.809 } 00:06:50.809 EOF 00:06:50.809 )") 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:50.809 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:50.809 "params": { 00:06:50.809 "name": "Nvme0", 00:06:50.809 "trtype": "tcp", 00:06:50.809 "traddr": "10.0.0.2", 00:06:50.809 "adrfam": "ipv4", 00:06:50.809 "trsvcid": "4420", 00:06:50.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:50.809 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:50.809 "hdgst": false, 00:06:50.809 "ddgst": false 00:06:50.809 }, 00:06:50.809 "method": "bdev_nvme_attach_controller" 00:06:50.809 }' 00:06:50.809 [2024-11-19 11:18:04.406662] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:50.809 [2024-11-19 11:18:04.406708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104238 ] 00:06:50.809 [2024-11-19 11:18:04.486055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.809 [2024-11-19 11:18:04.527813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.379 Running I/O for 10 seconds... 
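The JSON printed just above by `printf '%s\n' '{ ... }'` is what bdevperf receives on `--json /dev/fd/63`. A reduced sketch of how `gen_nvmf_target_json` (common.sh@560-586) assembles that fragment — subsystem number, address, and port are hard-coded here to the values from this run, and `jq` is omitted to keep the sketch dependency-free:

```shell
#!/usr/bin/env bash
# Reduced sketch of gen_nvmf_target_json: one config fragment per subsystem,
# joined with IFS=, and printed for bdevperf's --json input.
NVMF_FIRST_TARGET_IP=10.0.0.2  # values from this run
NVMF_PORT=4420

config=()
for subsystem in 0; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Join the fragments; a multi-subsystem run would yield a comma-separated list
IFS=, json="${config[*]}"
echo "$json"
```

The real helper additionally pipes the joined fragments through `jq .` (common.sh@584-586) to normalize the output, which is the pretty-printed form visible in the log.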
00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=103 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 103 -ge 100 ']' 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.379 [2024-11-19 11:18:04.952599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e200 is same with the state(6) to be set 00:06:51.379 [2024-11-19 11:18:04.952665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e200 is same with the state(6) to be set 00:06:51.379 [2024-11-19 11:18:04.952674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e200 is 
same with the state(6) to be set 00:06:51.379 [2024-11-19 11:18:04.952681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e200 is same with the state(6) to be set 00:06:51.379 [2024-11-19 11:18:04.952687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e200 is same with the state(6) to be set 00:06:51.379 [2024-11-19 11:18:04.952693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e200 is same with the state(6) to be set 00:06:51.379 [2024-11-19 11:18:04.952700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e200 is same with the state(6) to be set 00:06:51.379 [2024-11-19 11:18:04.952706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e200 is same with the state(6) to be set 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.379 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.379 [2024-11-19 11:18:04.959885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.379 [2024-11-19 11:18:04.959915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.380 [2024-11-19 11:18:04.959924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.380 [2024-11-19 11:18:04.959931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.380 [2024-11-19 11:18:04.959939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.380 [2024-11-19 11:18:04.959946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.380 [2024-11-19 11:18:04.959960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.380 [2024-11-19 11:18:04.959968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.380 [2024-11-19 11:18:04.959975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1952500 is same with the state(6) to be set 00:06:51.380 [2024-11-19 11:18:04.960012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.380 [2024-11-19 11:18:04.960021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION pairs repeated on qid:1 for cid:1 through cid:57 (lba 24704 through 31872, stepping by 128), timestamps 11:18:04.960034 through 11:18:04.960879 ...]
00:06:51.381 [2024-11-19 11:18:04.960887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.381 [2024-11-19 11:18:04.960894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.381 [2024-11-19 11:18:04.960903] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.381 [2024-11-19 11:18:04.960910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.381 [2024-11-19 11:18:04.960919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.381 [2024-11-19 11:18:04.960926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.381 [2024-11-19 11:18:04.960933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.381 [2024-11-19 11:18:04.960940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.381 [2024-11-19 11:18:04.960952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.381 [2024-11-19 11:18:04.960960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.381 [2024-11-19 11:18:04.960968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.381 [2024-11-19 11:18:04.960975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.381 [2024-11-19 11:18:04.961923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:51.381 task offset: 24576 on job bdev=Nvme0n1 fails 00:06:51.381 00:06:51.381 Latency(us) 00:06:51.381 [2024-11-19T10:18:05.162Z] 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:51.381 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:51.381 Job: Nvme0n1 ended in about 0.11 seconds with error
00:06:51.381 Verification LBA range: start 0x0 length 0x400
00:06:51.381 Nvme0n1 : 0.11 1707.43 106.71 569.14 0.00 25932.15 1617.03 27468.13
00:06:51.381 [2024-11-19T10:18:05.162Z] ===================================================================================================================
00:06:51.381 [2024-11-19T10:18:05.162Z] Total : 1707.43 106.71 569.14 0.00 25932.15 1617.03 27468.13
00:06:51.381 [2024-11-19 11:18:04.964319] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:51.381 [2024-11-19 11:18:04.964341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1952500 (9): Bad file descriptor
00:06:51.381 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.381 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:51.381 [2024-11-19 11:18:04.974463] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
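After this failed first run, the harness kills the stale bdevperf process and tolerates the resulting "No such process" error via `true`. A minimal sketch of that cleanup idiom, assuming a stand-in background process rather than the real SPDK scripts:

```shell
#!/usr/bin/env bash
# Sketch of the cleanup idiom in the trace: kill a test process, then
# tolerate a second kill of the already-dead pid, the way the harness
# does with `kill ... || true`. The sleep is a stand-in for a leftover
# bdevperf process.
sleep 30 &
pid=$!

kill -9 "$pid"               # first kill succeeds
wait "$pid" 2>/dev/null      # reap it so the pid is really gone

kill -9 "$pid" 2>/dev/null || true   # already dead: fails harmlessly
echo "cleanup done"
```

The `|| true` (or a bare `true` on the next traced line, as here) keeps `set -e` scripts from aborting when the target process has already exited.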
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2104238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2104238) - No such process
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:52.319 {
00:06:52.319 "params": {
00:06:52.319 "name": "Nvme$subsystem",
00:06:52.319 "trtype": "$TEST_TRANSPORT",
00:06:52.319 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:52.319 "adrfam": "ipv4",
00:06:52.319 "trsvcid": "$NVMF_PORT",
00:06:52.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:52.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:52.319 "hdgst": ${hdgst:-false},
00:06:52.319 "ddgst": ${ddgst:-false}
00:06:52.319 },
00:06:52.319 "method": "bdev_nvme_attach_controller"
00:06:52.319 }
00:06:52.319 EOF
00:06:52.319 )")
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:52.319 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:52.319 "params": {
00:06:52.319 "name": "Nvme0",
00:06:52.319 "trtype": "tcp",
00:06:52.319 "traddr": "10.0.0.2",
00:06:52.319 "adrfam": "ipv4",
00:06:52.319 "trsvcid": "4420",
00:06:52.319 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:52.319 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:52.319 "hdgst": false,
00:06:52.319 "ddgst": false
00:06:52.319 },
00:06:52.319 "method": "bdev_nvme_attach_controller"
00:06:52.319 }'
00:06:52.319 [2024-11-19 11:18:06.019144] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:06:52.319 [2024-11-19 11:18:06.019191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104699 ]
00:06:52.319 [2024-11-19 11:18:06.093089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.578 [2024-11-19 11:18:06.134822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.837 Running I/O for 1 seconds...
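The `gen_nvmf_target_json` trace above expands a here-document template once per subsystem, collects the fragments into an array, and joins them with `IFS=,` before printing (the real helper also pipes the result through `jq .`). A simplified sketch of that pattern, using a hypothetical `gen_target_json` helper rather than the actual nvmf/common.sh function, with the `jq` validation step omitted:

```shell
#!/usr/bin/env bash
# Simplified sketch of the per-subsystem JSON generation traced above.
# Each loop iteration expands the here-document with the current
# $subsystem value; the fragments are joined with commas via IFS.
gen_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,          # "${config[*]}" joins elements with the first IFS char
    printf '%s\n' "${config[*]}"
}

gen_target_json 0
```

Passing several subsystem numbers (`gen_target_json 0 1`) would emit comma-joined fragments, which is why the harness sets `IFS=,` before expanding `"${config[*]}"`.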
00:06:53.775 1984.00 IOPS, 124.00 MiB/s
00:06:53.775 Latency(us)
00:06:53.775 [2024-11-19T10:18:07.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:53.775 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:53.775 Verification LBA range: start 0x0 length 0x400
00:06:53.775 Nvme0n1 : 1.02 2013.98 125.87 0.00 0.00 31273.97 4388.06 27468.13
00:06:53.775 [2024-11-19T10:18:07.556Z] ===================================================================================================================
00:06:53.775 [2024-11-19T10:18:07.556Z] Total : 2013.98 125.87 0.00 0.00 31273.97 4388.06 27468.13
00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:54.034 11:18:07
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:54.034 rmmod nvme_tcp 00:06:54.034 rmmod nvme_fabrics 00:06:54.034 rmmod nvme_keyring 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2104015 ']' 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2104015 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2104015 ']' 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2104015 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2104015 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2104015' 00:06:54.034 killing process with pid 2104015 00:06:54.034 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2104015 00:06:54.034 11:18:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2104015 00:06:54.293 [2024-11-19 11:18:07.921473] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:54.293 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:54.293 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:54.293 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:54.293 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:54.293 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:54.293 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:54.293 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:54.293 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:54.293 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:54.293 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.293 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:54.293 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:56.830 00:06:56.830 real 0m12.937s 00:06:56.830 user 0m21.779s 
00:06:56.830 sys 0m5.581s 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.830 ************************************ 00:06:56.830 END TEST nvmf_host_management 00:06:56.830 ************************************ 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:56.830 ************************************ 00:06:56.830 START TEST nvmf_lvol 00:06:56.830 ************************************ 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:56.830 * Looking for test storage... 
00:06:56.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.830 11:18:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.830 --rc genhtml_branch_coverage=1 00:06:56.830 --rc genhtml_function_coverage=1 00:06:56.830 --rc genhtml_legend=1 00:06:56.830 --rc geninfo_all_blocks=1 00:06:56.830 --rc geninfo_unexecuted_blocks=1 
00:06:56.830 00:06:56.830 ' 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.830 --rc genhtml_branch_coverage=1 00:06:56.830 --rc genhtml_function_coverage=1 00:06:56.830 --rc genhtml_legend=1 00:06:56.830 --rc geninfo_all_blocks=1 00:06:56.830 --rc geninfo_unexecuted_blocks=1 00:06:56.830 00:06:56.830 ' 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.830 --rc genhtml_branch_coverage=1 00:06:56.830 --rc genhtml_function_coverage=1 00:06:56.830 --rc genhtml_legend=1 00:06:56.830 --rc geninfo_all_blocks=1 00:06:56.830 --rc geninfo_unexecuted_blocks=1 00:06:56.830 00:06:56.830 ' 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.830 --rc genhtml_branch_coverage=1 00:06:56.830 --rc genhtml_function_coverage=1 00:06:56.830 --rc genhtml_legend=1 00:06:56.830 --rc geninfo_all_blocks=1 00:06:56.830 --rc geninfo_unexecuted_blocks=1 00:06:56.830 00:06:56.830 ' 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.830 11:18:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.830 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:56.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:56.831 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:02.256 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:02.256 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.256 
11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:02.256 Found net devices under 0000:86:00.0: cvl_0_0 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:02.256 11:18:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:02.256 Found net devices under 0000:86:00.1: cvl_0_1 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.256 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:02.516 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:02.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:02.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:07:02.516 00:07:02.517 --- 10.0.0.2 ping statistics --- 00:07:02.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.517 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:07:02.517 00:07:02.517 --- 10.0.0.1 ping statistics --- 00:07:02.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.517 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:02.517 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:02.776 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:02.776 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2108705 00:07:02.776 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:02.776 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2108705 00:07:02.776 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2108705 ']' 00:07:02.776 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.776 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.776 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.776 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.776 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:02.776 [2024-11-19 11:18:16.351910] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:02.776 [2024-11-19 11:18:16.351964] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.776 [2024-11-19 11:18:16.428905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.776 [2024-11-19 11:18:16.469186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.776 [2024-11-19 11:18:16.469224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.776 [2024-11-19 11:18:16.469231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.776 [2024-11-19 11:18:16.469237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.776 [2024-11-19 11:18:16.469242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:02.776 [2024-11-19 11:18:16.470640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.776 [2024-11-19 11:18:16.470747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.776 [2024-11-19 11:18:16.470748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.035 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.035 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:03.035 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:03.035 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:03.035 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:03.035 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.035 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:03.035 [2024-11-19 11:18:16.783518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.294 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:03.294 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:03.294 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:03.553 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:03.553 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:03.811 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:04.071 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=230341f6-54a6-4a8e-88c8-1ad5c451142b 00:07:04.071 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 230341f6-54a6-4a8e-88c8-1ad5c451142b lvol 20 00:07:04.330 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=80f3af19-0856-4086-b737-110dfe7fd42c 00:07:04.330 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:04.330 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80f3af19-0856-4086-b737-110dfe7fd42c 00:07:04.589 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:04.847 [2024-11-19 11:18:18.481908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.847 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.106 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2109193 00:07:05.106 11:18:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:05.106 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:06.041 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 80f3af19-0856-4086-b737-110dfe7fd42c MY_SNAPSHOT 00:07:06.300 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6ec937db-884b-4883-92c3-b36bab135328 00:07:06.300 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 80f3af19-0856-4086-b737-110dfe7fd42c 30 00:07:06.559 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6ec937db-884b-4883-92c3-b36bab135328 MY_CLONE 00:07:06.818 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2a932d27-5750-4edc-acf0-f32625199f82 00:07:06.818 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2a932d27-5750-4edc-acf0-f32625199f82 00:07:07.388 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2109193 00:07:15.509 Initializing NVMe Controllers 00:07:15.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:15.509 Controller IO queue size 128, less than required. 00:07:15.509 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:15.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:15.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:15.509 Initialization complete. Launching workers. 00:07:15.509 ======================================================== 00:07:15.509 Latency(us) 00:07:15.509 Device Information : IOPS MiB/s Average min max 00:07:15.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12035.40 47.01 10634.59 259.24 61557.33 00:07:15.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11990.80 46.84 10675.51 3510.75 63477.48 00:07:15.509 ======================================================== 00:07:15.509 Total : 24026.20 93.85 10655.01 259.24 63477.48 00:07:15.509 00:07:15.509 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:15.768 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 80f3af19-0856-4086-b737-110dfe7fd42c 00:07:15.768 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 230341f6-54a6-4a8e-88c8-1ad5c451142b 00:07:16.028 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:16.028 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:16.028 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:16.028 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:16.028 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:16.028 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:16.028 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:16.028 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:16.028 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:16.028 rmmod nvme_tcp 00:07:16.028 rmmod nvme_fabrics 00:07:16.028 rmmod nvme_keyring 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2108705 ']' 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2108705 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2108705 ']' 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2108705 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2108705 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2108705' 00:07:16.287 killing process with pid 2108705 00:07:16.287 11:18:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2108705 00:07:16.287 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2108705 00:07:16.547 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:16.547 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:16.547 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:16.547 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:16.547 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:16.547 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:16.547 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:16.547 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.547 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:16.547 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.547 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.547 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.455 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:18.455 00:07:18.455 real 0m22.052s 00:07:18.455 user 1m3.441s 00:07:18.455 sys 0m7.681s 00:07:18.455 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.455 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:18.455 ************************************ 00:07:18.455 END TEST 
nvmf_lvol 00:07:18.455 ************************************ 00:07:18.455 11:18:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:18.455 11:18:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.455 11:18:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.456 11:18:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.456 ************************************ 00:07:18.456 START TEST nvmf_lvs_grow 00:07:18.456 ************************************ 00:07:18.456 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:18.715 * Looking for test storage... 00:07:18.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.715 11:18:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:18.715 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.716 --rc genhtml_branch_coverage=1 00:07:18.716 --rc genhtml_function_coverage=1 00:07:18.716 --rc genhtml_legend=1 00:07:18.716 --rc geninfo_all_blocks=1 00:07:18.716 --rc geninfo_unexecuted_blocks=1 00:07:18.716 00:07:18.716 ' 
00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.716 --rc genhtml_branch_coverage=1 00:07:18.716 --rc genhtml_function_coverage=1 00:07:18.716 --rc genhtml_legend=1 00:07:18.716 --rc geninfo_all_blocks=1 00:07:18.716 --rc geninfo_unexecuted_blocks=1 00:07:18.716 00:07:18.716 ' 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.716 --rc genhtml_branch_coverage=1 00:07:18.716 --rc genhtml_function_coverage=1 00:07:18.716 --rc genhtml_legend=1 00:07:18.716 --rc geninfo_all_blocks=1 00:07:18.716 --rc geninfo_unexecuted_blocks=1 00:07:18.716 00:07:18.716 ' 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.716 --rc genhtml_branch_coverage=1 00:07:18.716 --rc genhtml_function_coverage=1 00:07:18.716 --rc genhtml_legend=1 00:07:18.716 --rc geninfo_all_blocks=1 00:07:18.716 --rc geninfo_unexecuted_blocks=1 00:07:18.716 00:07:18.716 ' 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.716 11:18:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.716 
11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.716 11:18:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.716 
11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.716 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:25.290 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:25.290 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.290 
11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.290 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:25.291 Found net devices under 0000:86:00.0: cvl_0_0 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:25.291 Found net devices under 0000:86:00.1: cvl_0_1 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:25.291 11:18:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:25.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:07:25.291 00:07:25.291 --- 10.0.0.2 ping statistics --- 00:07:25.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.291 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:07:25.291 00:07:25.291 --- 10.0.0.1 ping statistics --- 00:07:25.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.291 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2114580 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2114580 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2114580 ']' 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.291 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.291 [2024-11-19 11:18:38.521348] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:25.291 [2024-11-19 11:18:38.521401] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.291 [2024-11-19 11:18:38.599965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.291 [2024-11-19 11:18:38.640455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.291 [2024-11-19 11:18:38.640491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.291 [2024-11-19 11:18:38.640499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.291 [2024-11-19 11:18:38.640505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.291 [2024-11-19 11:18:38.640509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:25.291 [2024-11-19 11:18:38.641075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.859 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.859 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:25.859 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.859 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.859 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.859 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.859 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:25.859 [2024-11-19 11:18:39.558850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.859 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:25.859 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.860 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.860 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.860 ************************************ 00:07:25.860 START TEST lvs_grow_clean 00:07:25.860 ************************************ 00:07:25.860 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:25.860 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:25.860 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:25.860 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:25.860 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:25.860 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:25.860 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:25.860 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.860 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.860 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.119 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:26.119 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:26.379 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ae192a2c-a6f9-4da1-a36f-2822db82ec1f 00:07:26.379 11:18:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae192a2c-a6f9-4da1-a36f-2822db82ec1f 00:07:26.379 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:26.638 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:26.638 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:26.638 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ae192a2c-a6f9-4da1-a36f-2822db82ec1f lvol 150 00:07:26.897 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=76c6c581-56df-4ebc-aeef-0073b30e91a5 00:07:26.897 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.897 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:26.898 [2024-11-19 11:18:40.626002] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:26.898 [2024-11-19 11:18:40.626064] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:26.898 true 00:07:26.898 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae192a2c-a6f9-4da1-a36f-2822db82ec1f 00:07:26.898 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:27.157 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:27.157 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:27.417 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 76c6c581-56df-4ebc-aeef-0073b30e91a5 00:07:27.676 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:27.676 [2024-11-19 11:18:41.364212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.676 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.936 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2115091 00:07:27.936 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:27.936 11:18:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:27.936 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2115091 /var/tmp/bdevperf.sock 00:07:27.936 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2115091 ']' 00:07:27.936 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:27.936 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.936 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:27.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:27.936 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.936 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:27.936 [2024-11-19 11:18:41.627207] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:27.936 [2024-11-19 11:18:41.627253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115091 ] 00:07:27.936 [2024-11-19 11:18:41.704196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.195 [2024-11-19 11:18:41.747240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.195 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.195 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:28.195 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:28.453 Nvme0n1 00:07:28.453 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:28.712 [ 00:07:28.712 { 00:07:28.712 "name": "Nvme0n1", 00:07:28.712 "aliases": [ 00:07:28.712 "76c6c581-56df-4ebc-aeef-0073b30e91a5" 00:07:28.712 ], 00:07:28.712 "product_name": "NVMe disk", 00:07:28.712 "block_size": 4096, 00:07:28.712 "num_blocks": 38912, 00:07:28.712 "uuid": "76c6c581-56df-4ebc-aeef-0073b30e91a5", 00:07:28.712 "numa_id": 1, 00:07:28.712 "assigned_rate_limits": { 00:07:28.712 "rw_ios_per_sec": 0, 00:07:28.712 "rw_mbytes_per_sec": 0, 00:07:28.712 "r_mbytes_per_sec": 0, 00:07:28.712 "w_mbytes_per_sec": 0 00:07:28.712 }, 00:07:28.712 "claimed": false, 00:07:28.712 "zoned": false, 00:07:28.712 "supported_io_types": { 00:07:28.712 "read": true, 
00:07:28.712 "write": true, 00:07:28.712 "unmap": true, 00:07:28.712 "flush": true, 00:07:28.712 "reset": true, 00:07:28.712 "nvme_admin": true, 00:07:28.712 "nvme_io": true, 00:07:28.712 "nvme_io_md": false, 00:07:28.712 "write_zeroes": true, 00:07:28.712 "zcopy": false, 00:07:28.712 "get_zone_info": false, 00:07:28.712 "zone_management": false, 00:07:28.712 "zone_append": false, 00:07:28.712 "compare": true, 00:07:28.712 "compare_and_write": true, 00:07:28.712 "abort": true, 00:07:28.712 "seek_hole": false, 00:07:28.712 "seek_data": false, 00:07:28.712 "copy": true, 00:07:28.712 "nvme_iov_md": false 00:07:28.712 }, 00:07:28.712 "memory_domains": [ 00:07:28.712 { 00:07:28.712 "dma_device_id": "system", 00:07:28.712 "dma_device_type": 1 00:07:28.712 } 00:07:28.712 ], 00:07:28.712 "driver_specific": { 00:07:28.712 "nvme": [ 00:07:28.712 { 00:07:28.712 "trid": { 00:07:28.712 "trtype": "TCP", 00:07:28.712 "adrfam": "IPv4", 00:07:28.712 "traddr": "10.0.0.2", 00:07:28.712 "trsvcid": "4420", 00:07:28.712 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:28.712 }, 00:07:28.712 "ctrlr_data": { 00:07:28.712 "cntlid": 1, 00:07:28.712 "vendor_id": "0x8086", 00:07:28.712 "model_number": "SPDK bdev Controller", 00:07:28.712 "serial_number": "SPDK0", 00:07:28.712 "firmware_revision": "25.01", 00:07:28.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:28.712 "oacs": { 00:07:28.712 "security": 0, 00:07:28.712 "format": 0, 00:07:28.712 "firmware": 0, 00:07:28.712 "ns_manage": 0 00:07:28.712 }, 00:07:28.712 "multi_ctrlr": true, 00:07:28.712 "ana_reporting": false 00:07:28.712 }, 00:07:28.712 "vs": { 00:07:28.712 "nvme_version": "1.3" 00:07:28.712 }, 00:07:28.712 "ns_data": { 00:07:28.712 "id": 1, 00:07:28.712 "can_share": true 00:07:28.712 } 00:07:28.712 } 00:07:28.712 ], 00:07:28.712 "mp_policy": "active_passive" 00:07:28.712 } 00:07:28.712 } 00:07:28.712 ] 00:07:28.712 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2115313 00:07:28.712 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:28.712 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:28.712 Running I/O for 10 seconds... 00:07:29.649 Latency(us) 00:07:29.649 [2024-11-19T10:18:43.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.649 Nvme0n1 : 1.00 22737.00 88.82 0.00 0.00 0.00 0.00 0.00 00:07:29.649 [2024-11-19T10:18:43.430Z] =================================================================================================================== 00:07:29.649 [2024-11-19T10:18:43.430Z] Total : 22737.00 88.82 0.00 0.00 0.00 0.00 0.00 00:07:29.649 00:07:30.586 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ae192a2c-a6f9-4da1-a36f-2822db82ec1f 00:07:30.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.845 Nvme0n1 : 2.00 22897.50 89.44 0.00 0.00 0.00 0.00 0.00 00:07:30.845 [2024-11-19T10:18:44.627Z] =================================================================================================================== 00:07:30.846 [2024-11-19T10:18:44.627Z] Total : 22897.50 89.44 0.00 0.00 0.00 0.00 0.00 00:07:30.846 00:07:30.846 true 00:07:30.846 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae192a2c-a6f9-4da1-a36f-2822db82ec1f 00:07:30.846 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:31.105 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:31.105 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:31.105 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2115313 00:07:31.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.672 Nvme0n1 : 3.00 22955.00 89.67 0.00 0.00 0.00 0.00 0.00 00:07:31.672 [2024-11-19T10:18:45.453Z] =================================================================================================================== 00:07:31.672 [2024-11-19T10:18:45.453Z] Total : 22955.00 89.67 0.00 0.00 0.00 0.00 0.00 00:07:31.672 00:07:33.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.052 Nvme0n1 : 4.00 23026.75 89.95 0.00 0.00 0.00 0.00 0.00 00:07:33.052 [2024-11-19T10:18:46.833Z] =================================================================================================================== 00:07:33.052 [2024-11-19T10:18:46.833Z] Total : 23026.75 89.95 0.00 0.00 0.00 0.00 0.00 00:07:33.052 00:07:33.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.621 Nvme0n1 : 5.00 23082.40 90.17 0.00 0.00 0.00 0.00 0.00 00:07:33.621 [2024-11-19T10:18:47.402Z] =================================================================================================================== 00:07:33.621 [2024-11-19T10:18:47.402Z] Total : 23082.40 90.17 0.00 0.00 0.00 0.00 0.00 00:07:33.621 00:07:35.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.001 Nvme0n1 : 6.00 23112.67 90.28 0.00 0.00 0.00 0.00 0.00 00:07:35.001 [2024-11-19T10:18:48.782Z] =================================================================================================================== 00:07:35.001 
[2024-11-19T10:18:48.782Z] Total : 23112.67 90.28 0.00 0.00 0.00 0.00 0.00 00:07:35.001 00:07:35.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.940 Nvme0n1 : 7.00 23141.86 90.40 0.00 0.00 0.00 0.00 0.00 00:07:35.940 [2024-11-19T10:18:49.721Z] =================================================================================================================== 00:07:35.940 [2024-11-19T10:18:49.721Z] Total : 23141.86 90.40 0.00 0.00 0.00 0.00 0.00 00:07:35.940 00:07:36.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.879 Nvme0n1 : 8.00 23121.25 90.32 0.00 0.00 0.00 0.00 0.00 00:07:36.879 [2024-11-19T10:18:50.660Z] =================================================================================================================== 00:07:36.879 [2024-11-19T10:18:50.660Z] Total : 23121.25 90.32 0.00 0.00 0.00 0.00 0.00 00:07:36.879 00:07:37.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.818 Nvme0n1 : 9.00 23138.56 90.38 0.00 0.00 0.00 0.00 0.00 00:07:37.818 [2024-11-19T10:18:51.599Z] =================================================================================================================== 00:07:37.818 [2024-11-19T10:18:51.599Z] Total : 23138.56 90.38 0.00 0.00 0.00 0.00 0.00 00:07:37.818 00:07:38.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.756 Nvme0n1 : 10.00 23155.60 90.45 0.00 0.00 0.00 0.00 0.00 00:07:38.756 [2024-11-19T10:18:52.537Z] =================================================================================================================== 00:07:38.756 [2024-11-19T10:18:52.537Z] Total : 23155.60 90.45 0.00 0.00 0.00 0.00 0.00 00:07:38.756 00:07:38.756 00:07:38.756 Latency(us) 00:07:38.756 [2024-11-19T10:18:52.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:38.756 Nvme0n1 : 10.01 23155.62 90.45 0.00 0.00 5524.87 2649.93 11283.59 00:07:38.756 [2024-11-19T10:18:52.537Z] =================================================================================================================== 00:07:38.756 [2024-11-19T10:18:52.537Z] Total : 23155.62 90.45 0.00 0.00 5524.87 2649.93 11283.59 00:07:38.756 { 00:07:38.756 "results": [ 00:07:38.756 { 00:07:38.756 "job": "Nvme0n1", 00:07:38.756 "core_mask": "0x2", 00:07:38.756 "workload": "randwrite", 00:07:38.756 "status": "finished", 00:07:38.756 "queue_depth": 128, 00:07:38.756 "io_size": 4096, 00:07:38.756 "runtime": 10.005521, 00:07:38.756 "iops": 23155.615784525362, 00:07:38.756 "mibps": 90.4516241583022, 00:07:38.756 "io_failed": 0, 00:07:38.756 "io_timeout": 0, 00:07:38.756 "avg_latency_us": 5524.874135880731, 00:07:38.756 "min_latency_us": 2649.9339130434782, 00:07:38.756 "max_latency_us": 11283.589565217391 00:07:38.756 } 00:07:38.756 ], 00:07:38.756 "core_count": 1 00:07:38.756 } 00:07:38.756 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2115091 00:07:38.756 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2115091 ']' 00:07:38.756 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2115091 00:07:38.756 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:38.756 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.756 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2115091 00:07:38.756 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:38.756 11:18:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:38.756 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2115091' 00:07:38.756 killing process with pid 2115091 00:07:38.756 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2115091 00:07:38.756 Received shutdown signal, test time was about 10.000000 seconds 00:07:38.756 00:07:38.756 Latency(us) 00:07:38.756 [2024-11-19T10:18:52.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.756 [2024-11-19T10:18:52.537Z] =================================================================================================================== 00:07:38.756 [2024-11-19T10:18:52.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:38.756 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2115091 00:07:39.015 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:39.277 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:39.536 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae192a2c-a6f9-4da1-a36f-2822db82ec1f 00:07:39.536 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:39.536 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:39.536 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:39.536 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:39.796 [2024-11-19 11:18:53.444410] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:39.796 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae192a2c-a6f9-4da1-a36f-2822db82ec1f 00:07:39.796 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:39.796 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae192a2c-a6f9-4da1-a36f-2822db82ec1f 00:07:39.796 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.796 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.796 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.796 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.796 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.796 
11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.796 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.796 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:39.796 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae192a2c-a6f9-4da1-a36f-2822db82ec1f 00:07:40.056 request: 00:07:40.056 { 00:07:40.056 "uuid": "ae192a2c-a6f9-4da1-a36f-2822db82ec1f", 00:07:40.056 "method": "bdev_lvol_get_lvstores", 00:07:40.056 "req_id": 1 00:07:40.056 } 00:07:40.056 Got JSON-RPC error response 00:07:40.056 response: 00:07:40.056 { 00:07:40.056 "code": -19, 00:07:40.056 "message": "No such device" 00:07:40.056 } 00:07:40.056 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:40.056 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:40.056 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:40.056 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:40.056 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:40.316 aio_bdev 00:07:40.316 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 76c6c581-56df-4ebc-aeef-0073b30e91a5 00:07:40.316 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=76c6c581-56df-4ebc-aeef-0073b30e91a5 00:07:40.316 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:40.316 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:40.316 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:40.316 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:40.316 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:40.316 11:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 76c6c581-56df-4ebc-aeef-0073b30e91a5 -t 2000 00:07:40.575 [ 00:07:40.575 { 00:07:40.575 "name": "76c6c581-56df-4ebc-aeef-0073b30e91a5", 00:07:40.575 "aliases": [ 00:07:40.575 "lvs/lvol" 00:07:40.575 ], 00:07:40.576 "product_name": "Logical Volume", 00:07:40.576 "block_size": 4096, 00:07:40.576 "num_blocks": 38912, 00:07:40.576 "uuid": "76c6c581-56df-4ebc-aeef-0073b30e91a5", 00:07:40.576 "assigned_rate_limits": { 00:07:40.576 "rw_ios_per_sec": 0, 00:07:40.576 "rw_mbytes_per_sec": 0, 00:07:40.576 "r_mbytes_per_sec": 0, 00:07:40.576 "w_mbytes_per_sec": 0 00:07:40.576 }, 00:07:40.576 "claimed": false, 00:07:40.576 "zoned": false, 00:07:40.576 "supported_io_types": { 00:07:40.576 "read": true, 00:07:40.576 "write": true, 00:07:40.576 "unmap": true, 00:07:40.576 "flush": false, 00:07:40.576 "reset": true, 00:07:40.576 
"nvme_admin": false, 00:07:40.576 "nvme_io": false, 00:07:40.576 "nvme_io_md": false, 00:07:40.576 "write_zeroes": true, 00:07:40.576 "zcopy": false, 00:07:40.576 "get_zone_info": false, 00:07:40.576 "zone_management": false, 00:07:40.576 "zone_append": false, 00:07:40.576 "compare": false, 00:07:40.576 "compare_and_write": false, 00:07:40.576 "abort": false, 00:07:40.576 "seek_hole": true, 00:07:40.576 "seek_data": true, 00:07:40.576 "copy": false, 00:07:40.576 "nvme_iov_md": false 00:07:40.576 }, 00:07:40.576 "driver_specific": { 00:07:40.576 "lvol": { 00:07:40.576 "lvol_store_uuid": "ae192a2c-a6f9-4da1-a36f-2822db82ec1f", 00:07:40.576 "base_bdev": "aio_bdev", 00:07:40.576 "thin_provision": false, 00:07:40.576 "num_allocated_clusters": 38, 00:07:40.576 "snapshot": false, 00:07:40.576 "clone": false, 00:07:40.576 "esnap_clone": false 00:07:40.576 } 00:07:40.576 } 00:07:40.576 } 00:07:40.576 ] 00:07:40.576 11:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:40.576 11:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae192a2c-a6f9-4da1-a36f-2822db82ec1f 00:07:40.576 11:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:40.835 11:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:40.835 11:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae192a2c-a6f9-4da1-a36f-2822db82ec1f 00:07:40.835 11:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:40.835 11:18:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:40.835 11:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 76c6c581-56df-4ebc-aeef-0073b30e91a5 00:07:41.094 11:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae192a2c-a6f9-4da1-a36f-2822db82ec1f 00:07:41.354 11:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.612 00:07:41.612 real 0m15.589s 00:07:41.612 user 0m15.127s 00:07:41.612 sys 0m1.534s 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:41.612 ************************************ 00:07:41.612 END TEST lvs_grow_clean 00:07:41.612 ************************************ 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.612 ************************************ 
00:07:41.612 START TEST lvs_grow_dirty 00:07:41.612 ************************************ 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.612 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:41.872 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:41.872 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:42.131 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:42.131 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:42.131 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:42.131 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:42.131 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:42.131 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 lvol 150 00:07:42.391 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=32b12022-fc7c-4778-9778-756bad03d10d 00:07:42.391 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.391 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:42.650 [2024-11-19 11:18:56.269903] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:42.650 [2024-11-19 11:18:56.269957] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:42.650 true 00:07:42.650 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:42.650 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:42.909 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:42.909 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:42.909 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 32b12022-fc7c-4778-9778-756bad03d10d 00:07:43.169 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:43.428 [2024-11-19 11:18:57.036285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.428 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2117841 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2117841 /var/tmp/bdevperf.sock 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2117841 ']' 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:43.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.688 [2024-11-19 11:18:57.256937] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:43.688 [2024-11-19 11:18:57.256989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117841 ] 00:07:43.688 [2024-11-19 11:18:57.333797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.688 [2024-11-19 11:18:57.376804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:43.688 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:44.257 Nvme0n1 00:07:44.257 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:44.257 [ 00:07:44.257 { 00:07:44.257 "name": "Nvme0n1", 00:07:44.257 "aliases": [ 00:07:44.257 "32b12022-fc7c-4778-9778-756bad03d10d" 00:07:44.257 ], 00:07:44.257 "product_name": "NVMe disk", 00:07:44.257 "block_size": 4096, 00:07:44.257 "num_blocks": 38912, 00:07:44.257 "uuid": "32b12022-fc7c-4778-9778-756bad03d10d", 00:07:44.257 "numa_id": 1, 00:07:44.257 "assigned_rate_limits": { 00:07:44.257 "rw_ios_per_sec": 0, 00:07:44.257 "rw_mbytes_per_sec": 0, 00:07:44.257 "r_mbytes_per_sec": 0, 00:07:44.257 "w_mbytes_per_sec": 0 00:07:44.257 }, 00:07:44.257 "claimed": false, 00:07:44.257 "zoned": false, 00:07:44.257 "supported_io_types": { 00:07:44.257 "read": true, 
00:07:44.257 "write": true, 00:07:44.257 "unmap": true, 00:07:44.257 "flush": true, 00:07:44.257 "reset": true, 00:07:44.257 "nvme_admin": true, 00:07:44.257 "nvme_io": true, 00:07:44.257 "nvme_io_md": false, 00:07:44.257 "write_zeroes": true, 00:07:44.257 "zcopy": false, 00:07:44.257 "get_zone_info": false, 00:07:44.257 "zone_management": false, 00:07:44.257 "zone_append": false, 00:07:44.257 "compare": true, 00:07:44.257 "compare_and_write": true, 00:07:44.257 "abort": true, 00:07:44.257 "seek_hole": false, 00:07:44.257 "seek_data": false, 00:07:44.257 "copy": true, 00:07:44.257 "nvme_iov_md": false 00:07:44.257 }, 00:07:44.257 "memory_domains": [ 00:07:44.257 { 00:07:44.257 "dma_device_id": "system", 00:07:44.257 "dma_device_type": 1 00:07:44.257 } 00:07:44.257 ], 00:07:44.257 "driver_specific": { 00:07:44.257 "nvme": [ 00:07:44.257 { 00:07:44.257 "trid": { 00:07:44.257 "trtype": "TCP", 00:07:44.257 "adrfam": "IPv4", 00:07:44.257 "traddr": "10.0.0.2", 00:07:44.257 "trsvcid": "4420", 00:07:44.257 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:44.257 }, 00:07:44.257 "ctrlr_data": { 00:07:44.257 "cntlid": 1, 00:07:44.257 "vendor_id": "0x8086", 00:07:44.257 "model_number": "SPDK bdev Controller", 00:07:44.257 "serial_number": "SPDK0", 00:07:44.257 "firmware_revision": "25.01", 00:07:44.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:44.257 "oacs": { 00:07:44.257 "security": 0, 00:07:44.257 "format": 0, 00:07:44.257 "firmware": 0, 00:07:44.257 "ns_manage": 0 00:07:44.257 }, 00:07:44.257 "multi_ctrlr": true, 00:07:44.257 "ana_reporting": false 00:07:44.257 }, 00:07:44.257 "vs": { 00:07:44.257 "nvme_version": "1.3" 00:07:44.257 }, 00:07:44.257 "ns_data": { 00:07:44.257 "id": 1, 00:07:44.257 "can_share": true 00:07:44.257 } 00:07:44.257 } 00:07:44.257 ], 00:07:44.257 "mp_policy": "active_passive" 00:07:44.257 } 00:07:44.257 } 00:07:44.257 ] 00:07:44.517 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2117919 00:07:44.517 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:44.517 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:44.517 Running I/O for 10 seconds... 00:07:45.456 Latency(us) 00:07:45.456 [2024-11-19T10:18:59.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.456 Nvme0n1 : 1.00 22825.00 89.16 0.00 0.00 0.00 0.00 0.00 00:07:45.456 [2024-11-19T10:18:59.237Z] =================================================================================================================== 00:07:45.456 [2024-11-19T10:18:59.237Z] Total : 22825.00 89.16 0.00 0.00 0.00 0.00 0.00 00:07:45.456 00:07:46.393 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:46.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.393 Nvme0n1 : 2.00 22868.50 89.33 0.00 0.00 0.00 0.00 0.00 00:07:46.393 [2024-11-19T10:19:00.174Z] =================================================================================================================== 00:07:46.393 [2024-11-19T10:19:00.174Z] Total : 22868.50 89.33 0.00 0.00 0.00 0.00 0.00 00:07:46.393 00:07:46.652 true 00:07:46.652 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:46.652 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:46.912 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:46.912 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:46.912 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2117919 00:07:47.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.481 Nvme0n1 : 3.00 22952.67 89.66 0.00 0.00 0.00 0.00 0.00 00:07:47.481 [2024-11-19T10:19:01.262Z] =================================================================================================================== 00:07:47.481 [2024-11-19T10:19:01.262Z] Total : 22952.67 89.66 0.00 0.00 0.00 0.00 0.00 00:07:47.481 00:07:48.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.428 Nvme0n1 : 4.00 23024.75 89.94 0.00 0.00 0.00 0.00 0.00 00:07:48.428 [2024-11-19T10:19:02.209Z] =================================================================================================================== 00:07:48.428 [2024-11-19T10:19:02.209Z] Total : 23024.75 89.94 0.00 0.00 0.00 0.00 0.00 00:07:48.428 00:07:49.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.532 Nvme0n1 : 5.00 23060.40 90.08 0.00 0.00 0.00 0.00 0.00 00:07:49.532 [2024-11-19T10:19:03.313Z] =================================================================================================================== 00:07:49.532 [2024-11-19T10:19:03.313Z] Total : 23060.40 90.08 0.00 0.00 0.00 0.00 0.00 00:07:49.532 00:07:50.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.471 Nvme0n1 : 6.00 23112.17 90.28 0.00 0.00 0.00 0.00 0.00 00:07:50.471 [2024-11-19T10:19:04.252Z] =================================================================================================================== 00:07:50.471 
[2024-11-19T10:19:04.252Z] Total : 23112.17 90.28 0.00 0.00 0.00 0.00 0.00 00:07:50.471 00:07:51.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.408 Nvme0n1 : 7.00 23130.86 90.35 0.00 0.00 0.00 0.00 0.00 00:07:51.408 [2024-11-19T10:19:05.189Z] =================================================================================================================== 00:07:51.408 [2024-11-19T10:19:05.189Z] Total : 23130.86 90.35 0.00 0.00 0.00 0.00 0.00 00:07:51.408 00:07:52.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.786 Nvme0n1 : 8.00 23156.12 90.45 0.00 0.00 0.00 0.00 0.00 00:07:52.786 [2024-11-19T10:19:06.567Z] =================================================================================================================== 00:07:52.786 [2024-11-19T10:19:06.567Z] Total : 23156.12 90.45 0.00 0.00 0.00 0.00 0.00 00:07:52.786 00:07:53.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.722 Nvme0n1 : 9.00 23180.00 90.55 0.00 0.00 0.00 0.00 0.00 00:07:53.722 [2024-11-19T10:19:07.503Z] =================================================================================================================== 00:07:53.722 [2024-11-19T10:19:07.503Z] Total : 23180.00 90.55 0.00 0.00 0.00 0.00 0.00 00:07:53.722 00:07:54.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.661 Nvme0n1 : 10.00 23199.70 90.62 0.00 0.00 0.00 0.00 0.00 00:07:54.661 [2024-11-19T10:19:08.442Z] =================================================================================================================== 00:07:54.661 [2024-11-19T10:19:08.442Z] Total : 23199.70 90.62 0.00 0.00 0.00 0.00 0.00 00:07:54.661 00:07:54.661 00:07:54.661 Latency(us) 00:07:54.661 [2024-11-19T10:19:08.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:54.661 Nvme0n1 : 10.00 23207.10 90.65 0.00 0.00 5512.65 3262.55 11397.57 00:07:54.661 [2024-11-19T10:19:08.442Z] =================================================================================================================== 00:07:54.661 [2024-11-19T10:19:08.442Z] Total : 23207.10 90.65 0.00 0.00 5512.65 3262.55 11397.57 00:07:54.661 { 00:07:54.661 "results": [ 00:07:54.661 { 00:07:54.661 "job": "Nvme0n1", 00:07:54.661 "core_mask": "0x2", 00:07:54.661 "workload": "randwrite", 00:07:54.661 "status": "finished", 00:07:54.661 "queue_depth": 128, 00:07:54.661 "io_size": 4096, 00:07:54.661 "runtime": 10.002325, 00:07:54.661 "iops": 23207.104348239034, 00:07:54.661 "mibps": 90.65275136030873, 00:07:54.661 "io_failed": 0, 00:07:54.661 "io_timeout": 0, 00:07:54.661 "avg_latency_us": 5512.6464976685165, 00:07:54.661 "min_latency_us": 3262.553043478261, 00:07:54.661 "max_latency_us": 11397.565217391304 00:07:54.661 } 00:07:54.661 ], 00:07:54.661 "core_count": 1 00:07:54.661 } 00:07:54.661 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2117841 00:07:54.661 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2117841 ']' 00:07:54.661 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2117841 00:07:54.661 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:54.661 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.661 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117841 00:07:54.661 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:54.661 11:19:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:54.661 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117841' 00:07:54.661 killing process with pid 2117841 00:07:54.661 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2117841 00:07:54.661 Received shutdown signal, test time was about 10.000000 seconds 00:07:54.661 00:07:54.661 Latency(us) 00:07:54.661 [2024-11-19T10:19:08.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.661 [2024-11-19T10:19:08.442Z] =================================================================================================================== 00:07:54.661 [2024-11-19T10:19:08.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:54.661 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2117841 00:07:54.661 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:54.920 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:55.179 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:55.179 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:55.439 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:55.439 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:55.439 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2114580 00:07:55.439 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2114580 00:07:55.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2114580 Killed "${NVMF_APP[@]}" "$@" 00:07:55.439 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:55.439 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:55.439 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:55.439 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:55.439 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:55.439 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2119773 00:07:55.439 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2119773 00:07:55.439 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:55.439 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2119773 ']' 00:07:55.439 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.439 11:19:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.439 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.439 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.439 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:55.439 [2024-11-19 11:19:09.051470] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:55.440 [2024-11-19 11:19:09.051519] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.440 [2024-11-19 11:19:09.130045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.440 [2024-11-19 11:19:09.171741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.440 [2024-11-19 11:19:09.171775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.440 [2024-11-19 11:19:09.171782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.440 [2024-11-19 11:19:09.171789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.440 [2024-11-19 11:19:09.171794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:55.440 [2024-11-19 11:19:09.172354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.699 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.699 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:55.699 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:55.699 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:55.699 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:55.699 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.699 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:55.699 [2024-11-19 11:19:09.470292] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:55.699 [2024-11-19 11:19:09.470371] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:55.699 [2024-11-19 11:19:09.470398] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:55.959 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:55.959 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 32b12022-fc7c-4778-9778-756bad03d10d 00:07:55.959 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=32b12022-fc7c-4778-9778-756bad03d10d 
00:07:55.959 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.959 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:55.959 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.959 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.959 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:55.959 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 32b12022-fc7c-4778-9778-756bad03d10d -t 2000 00:07:56.218 [ 00:07:56.218 { 00:07:56.218 "name": "32b12022-fc7c-4778-9778-756bad03d10d", 00:07:56.218 "aliases": [ 00:07:56.218 "lvs/lvol" 00:07:56.218 ], 00:07:56.218 "product_name": "Logical Volume", 00:07:56.218 "block_size": 4096, 00:07:56.218 "num_blocks": 38912, 00:07:56.218 "uuid": "32b12022-fc7c-4778-9778-756bad03d10d", 00:07:56.218 "assigned_rate_limits": { 00:07:56.218 "rw_ios_per_sec": 0, 00:07:56.218 "rw_mbytes_per_sec": 0, 00:07:56.218 "r_mbytes_per_sec": 0, 00:07:56.218 "w_mbytes_per_sec": 0 00:07:56.218 }, 00:07:56.218 "claimed": false, 00:07:56.218 "zoned": false, 00:07:56.218 "supported_io_types": { 00:07:56.218 "read": true, 00:07:56.218 "write": true, 00:07:56.218 "unmap": true, 00:07:56.218 "flush": false, 00:07:56.218 "reset": true, 00:07:56.218 "nvme_admin": false, 00:07:56.218 "nvme_io": false, 00:07:56.218 "nvme_io_md": false, 00:07:56.218 "write_zeroes": true, 00:07:56.218 "zcopy": false, 00:07:56.218 "get_zone_info": false, 00:07:56.218 "zone_management": false, 00:07:56.218 "zone_append": 
false, 00:07:56.218 "compare": false, 00:07:56.218 "compare_and_write": false, 00:07:56.218 "abort": false, 00:07:56.218 "seek_hole": true, 00:07:56.218 "seek_data": true, 00:07:56.218 "copy": false, 00:07:56.218 "nvme_iov_md": false 00:07:56.218 }, 00:07:56.218 "driver_specific": { 00:07:56.218 "lvol": { 00:07:56.218 "lvol_store_uuid": "2b3df388-3e04-4d2c-add0-6ef547b2dc84", 00:07:56.218 "base_bdev": "aio_bdev", 00:07:56.218 "thin_provision": false, 00:07:56.218 "num_allocated_clusters": 38, 00:07:56.218 "snapshot": false, 00:07:56.218 "clone": false, 00:07:56.218 "esnap_clone": false 00:07:56.218 } 00:07:56.218 } 00:07:56.218 } 00:07:56.218 ] 00:07:56.218 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:56.218 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:56.218 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:56.477 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:56.477 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:56.477 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:56.734 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:56.735 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:56.735 [2024-11-19 11:19:10.467256] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:56.735 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:56.735 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:56.735 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:56.735 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.735 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.735 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.735 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.735 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.735 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.735 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.735 11:19:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:56.735 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:56.994 request: 00:07:56.994 { 00:07:56.994 "uuid": "2b3df388-3e04-4d2c-add0-6ef547b2dc84", 00:07:56.994 "method": "bdev_lvol_get_lvstores", 00:07:56.994 "req_id": 1 00:07:56.994 } 00:07:56.994 Got JSON-RPC error response 00:07:56.994 response: 00:07:56.994 { 00:07:56.994 "code": -19, 00:07:56.994 "message": "No such device" 00:07:56.994 } 00:07:56.994 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:56.994 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.994 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:56.994 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.994 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:57.254 aio_bdev 00:07:57.254 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 32b12022-fc7c-4778-9778-756bad03d10d 00:07:57.254 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=32b12022-fc7c-4778-9778-756bad03d10d 00:07:57.254 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.254 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:57.254 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.254 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:57.254 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:57.512 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 32b12022-fc7c-4778-9778-756bad03d10d -t 2000 00:07:57.512 [ 00:07:57.512 { 00:07:57.512 "name": "32b12022-fc7c-4778-9778-756bad03d10d", 00:07:57.512 "aliases": [ 00:07:57.512 "lvs/lvol" 00:07:57.512 ], 00:07:57.512 "product_name": "Logical Volume", 00:07:57.512 "block_size": 4096, 00:07:57.512 "num_blocks": 38912, 00:07:57.513 "uuid": "32b12022-fc7c-4778-9778-756bad03d10d", 00:07:57.513 "assigned_rate_limits": { 00:07:57.513 "rw_ios_per_sec": 0, 00:07:57.513 "rw_mbytes_per_sec": 0, 00:07:57.513 "r_mbytes_per_sec": 0, 00:07:57.513 "w_mbytes_per_sec": 0 00:07:57.513 }, 00:07:57.513 "claimed": false, 00:07:57.513 "zoned": false, 00:07:57.513 "supported_io_types": { 00:07:57.513 "read": true, 00:07:57.513 "write": true, 00:07:57.513 "unmap": true, 00:07:57.513 "flush": false, 00:07:57.513 "reset": true, 00:07:57.513 "nvme_admin": false, 00:07:57.513 "nvme_io": false, 00:07:57.513 "nvme_io_md": false, 00:07:57.513 "write_zeroes": true, 00:07:57.513 "zcopy": false, 00:07:57.513 "get_zone_info": false, 00:07:57.513 "zone_management": false, 00:07:57.513 "zone_append": false, 00:07:57.513 "compare": false, 00:07:57.513 "compare_and_write": false, 
00:07:57.513 "abort": false, 00:07:57.513 "seek_hole": true, 00:07:57.513 "seek_data": true, 00:07:57.513 "copy": false, 00:07:57.513 "nvme_iov_md": false 00:07:57.513 }, 00:07:57.513 "driver_specific": { 00:07:57.513 "lvol": { 00:07:57.513 "lvol_store_uuid": "2b3df388-3e04-4d2c-add0-6ef547b2dc84", 00:07:57.513 "base_bdev": "aio_bdev", 00:07:57.513 "thin_provision": false, 00:07:57.513 "num_allocated_clusters": 38, 00:07:57.513 "snapshot": false, 00:07:57.513 "clone": false, 00:07:57.513 "esnap_clone": false 00:07:57.513 } 00:07:57.513 } 00:07:57.513 } 00:07:57.513 ] 00:07:57.513 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:57.513 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:57.513 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:57.772 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:57.772 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:57.772 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:58.031 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:58.031 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 32b12022-fc7c-4778-9778-756bad03d10d 00:07:58.291 11:19:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2b3df388-3e04-4d2c-add0-6ef547b2dc84 00:07:58.291 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:58.550 00:07:58.550 real 0m16.941s 00:07:58.550 user 0m44.478s 00:07:58.550 sys 0m3.732s 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:58.550 ************************************ 00:07:58.550 END TEST lvs_grow_dirty 00:07:58.550 ************************************ 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:58.550 nvmf_trace.0 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.550 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:58.550 rmmod nvme_tcp 00:07:58.810 rmmod nvme_fabrics 00:07:58.810 rmmod nvme_keyring 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2119773 ']' 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2119773 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2119773 ']' 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2119773 
00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2119773 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2119773' 00:07:58.810 killing process with pid 2119773 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2119773 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2119773 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:58.810 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.069 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.069 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:59.069 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.069 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.069 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.975 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:00.975 00:08:00.975 real 0m42.437s 00:08:00.975 user 1m5.435s 00:08:00.975 sys 0m10.220s 00:08:00.975 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.975 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:00.975 ************************************ 00:08:00.975 END TEST nvmf_lvs_grow 00:08:00.975 ************************************ 00:08:00.975 11:19:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:00.975 11:19:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:00.975 11:19:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.975 11:19:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.975 ************************************ 00:08:00.975 START TEST nvmf_bdev_io_wait 00:08:00.975 ************************************ 00:08:00.975 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:01.235 * Looking for test storage... 
00:08:01.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.235 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.235 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.235 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.235 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.235 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.235 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.235 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.235 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.235 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.235 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.235 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.235 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.236 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.236 --rc genhtml_branch_coverage=1 00:08:01.236 --rc genhtml_function_coverage=1 00:08:01.236 --rc genhtml_legend=1 00:08:01.236 --rc geninfo_all_blocks=1 00:08:01.236 --rc geninfo_unexecuted_blocks=1 00:08:01.236 00:08:01.236 ' 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:01.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.236 --rc genhtml_branch_coverage=1 00:08:01.236 --rc genhtml_function_coverage=1 00:08:01.236 --rc genhtml_legend=1 00:08:01.236 --rc geninfo_all_blocks=1 00:08:01.236 --rc geninfo_unexecuted_blocks=1 00:08:01.236 00:08:01.236 ' 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.236 --rc genhtml_branch_coverage=1 00:08:01.236 --rc genhtml_function_coverage=1 00:08:01.236 --rc genhtml_legend=1 00:08:01.236 --rc geninfo_all_blocks=1 00:08:01.236 --rc geninfo_unexecuted_blocks=1 00:08:01.236 00:08:01.236 ' 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.236 --rc genhtml_branch_coverage=1 00:08:01.236 --rc genhtml_function_coverage=1 00:08:01.236 --rc genhtml_legend=1 00:08:01.236 --rc geninfo_all_blocks=1 00:08:01.236 --rc geninfo_unexecuted_blocks=1 00:08:01.236 00:08:01.236 ' 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.236 11:19:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.236 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.237 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:01.237 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:01.237 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:01.237 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.237 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:07.813 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:07.813 11:19:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:07.814 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:07.814 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.814 11:19:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:07.814 Found net devices under 0000:86:00.0: cvl_0_0 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.814 
11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:07.814 Found net devices under 0000:86:00.1: cvl_0_1 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.814 11:19:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:07.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:08:07.814 00:08:07.814 --- 10.0.0.2 ping statistics --- 00:08:07.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.814 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:08:07.814 00:08:07.814 --- 10.0.0.1 ping statistics --- 00:08:07.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.814 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2123938 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2123938 00:08:07.814 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2123938 ']' 00:08:07.815 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.815 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.815 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.815 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.815 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.815 [2024-11-19 11:19:20.986482] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:07.815 [2024-11-19 11:19:20.986534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.815 [2024-11-19 11:19:21.065798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.815 [2024-11-19 11:19:21.108858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.815 [2024-11-19 11:19:21.108898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:07.815 [2024-11-19 11:19:21.108906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.815 [2024-11-19 11:19:21.108912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.815 [2024-11-19 11:19:21.108918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.815 [2024-11-19 11:19:21.110541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.815 [2024-11-19 11:19:21.110649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.815 [2024-11-19 11:19:21.110758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.815 [2024-11-19 11:19:21.110759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.815 11:19:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.815 [2024-11-19 11:19:21.259390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.815 Malloc0 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.815 
11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.815 [2024-11-19 11:19:21.310985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2124071 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2124073 
00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.815 { 00:08:07.815 "params": { 00:08:07.815 "name": "Nvme$subsystem", 00:08:07.815 "trtype": "$TEST_TRANSPORT", 00:08:07.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.815 "adrfam": "ipv4", 00:08:07.815 "trsvcid": "$NVMF_PORT", 00:08:07.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.815 "hdgst": ${hdgst:-false}, 00:08:07.815 "ddgst": ${ddgst:-false} 00:08:07.815 }, 00:08:07.815 "method": "bdev_nvme_attach_controller" 00:08:07.815 } 00:08:07.815 EOF 00:08:07.815 )") 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2124075 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.815 { 00:08:07.815 "params": { 00:08:07.815 
"name": "Nvme$subsystem", 00:08:07.815 "trtype": "$TEST_TRANSPORT", 00:08:07.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.815 "adrfam": "ipv4", 00:08:07.815 "trsvcid": "$NVMF_PORT", 00:08:07.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.815 "hdgst": ${hdgst:-false}, 00:08:07.815 "ddgst": ${ddgst:-false} 00:08:07.815 }, 00:08:07.815 "method": "bdev_nvme_attach_controller" 00:08:07.815 } 00:08:07.815 EOF 00:08:07.815 )") 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2124078 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:07.815 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:08:07.816 { 00:08:07.816 "params": { 00:08:07.816 "name": "Nvme$subsystem", 00:08:07.816 "trtype": "$TEST_TRANSPORT", 00:08:07.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.816 "adrfam": "ipv4", 00:08:07.816 "trsvcid": "$NVMF_PORT", 00:08:07.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.816 "hdgst": ${hdgst:-false}, 00:08:07.816 "ddgst": ${ddgst:-false} 00:08:07.816 }, 00:08:07.816 "method": "bdev_nvme_attach_controller" 00:08:07.816 } 00:08:07.816 EOF 00:08:07.816 )") 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.816 { 00:08:07.816 "params": { 00:08:07.816 "name": "Nvme$subsystem", 00:08:07.816 "trtype": "$TEST_TRANSPORT", 00:08:07.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.816 "adrfam": "ipv4", 00:08:07.816 "trsvcid": "$NVMF_PORT", 00:08:07.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.816 "hdgst": ${hdgst:-false}, 00:08:07.816 "ddgst": ${ddgst:-false} 00:08:07.816 }, 00:08:07.816 "method": "bdev_nvme_attach_controller" 00:08:07.816 } 00:08:07.816 EOF 00:08:07.816 )") 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2124071 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:07.816 
11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.816 "params": { 00:08:07.816 "name": "Nvme1", 00:08:07.816 "trtype": "tcp", 00:08:07.816 "traddr": "10.0.0.2", 00:08:07.816 "adrfam": "ipv4", 00:08:07.816 "trsvcid": "4420", 00:08:07.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.816 "hdgst": false, 00:08:07.816 "ddgst": false 00:08:07.816 }, 00:08:07.816 "method": "bdev_nvme_attach_controller" 00:08:07.816 }' 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.816 "params": { 00:08:07.816 "name": "Nvme1", 00:08:07.816 "trtype": "tcp", 00:08:07.816 "traddr": "10.0.0.2", 00:08:07.816 "adrfam": "ipv4", 00:08:07.816 "trsvcid": "4420", 00:08:07.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.816 "hdgst": false, 00:08:07.816 "ddgst": false 00:08:07.816 }, 00:08:07.816 "method": "bdev_nvme_attach_controller" 00:08:07.816 }' 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.816 "params": { 00:08:07.816 "name": "Nvme1", 00:08:07.816 "trtype": "tcp", 00:08:07.816 "traddr": "10.0.0.2", 00:08:07.816 "adrfam": "ipv4", 00:08:07.816 "trsvcid": "4420", 00:08:07.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.816 "hdgst": false, 00:08:07.816 "ddgst": false 00:08:07.816 }, 00:08:07.816 "method": "bdev_nvme_attach_controller" 00:08:07.816 }' 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:07.816 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.816 "params": { 00:08:07.816 "name": "Nvme1", 00:08:07.816 "trtype": "tcp", 00:08:07.816 "traddr": "10.0.0.2", 00:08:07.816 "adrfam": "ipv4", 00:08:07.816 "trsvcid": "4420", 00:08:07.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.816 "hdgst": false, 00:08:07.816 "ddgst": false 00:08:07.816 }, 00:08:07.816 "method": "bdev_nvme_attach_controller" 00:08:07.816 }' 00:08:07.816 [2024-11-19 11:19:21.359848] Starting SPDK v25.01-pre git sha1 
dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:07.816 [2024-11-19 11:19:21.359897] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:07.816 [2024-11-19 11:19:21.364244] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:07.816 [2024-11-19 11:19:21.364284] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:07.816 [2024-11-19 11:19:21.365343] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:07.816 [2024-11-19 11:19:21.365387] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:07.816 [2024-11-19 11:19:21.365988] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:08:07.816 [2024-11-19 11:19:21.366030] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:07.816 [2024-11-19 11:19:21.543182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.816 [2024-11-19 11:19:21.586169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:08.075 [2024-11-19 11:19:21.635966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.075 [2024-11-19 11:19:21.678974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:08.075 [2024-11-19 11:19:21.735272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.075 [2024-11-19 11:19:21.790209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:08.075 [2024-11-19 11:19:21.797654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.075 [2024-11-19 11:19:21.840410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:08.332 Running I/O for 1 seconds... 00:08:08.332 Running I/O for 1 seconds... 00:08:08.332 Running I/O for 1 seconds... 00:08:08.332 Running I/O for 1 seconds... 
00:08:09.265 8789.00 IOPS, 34.33 MiB/s [2024-11-19T10:19:23.046Z] 247360.00 IOPS, 966.25 MiB/s 00:08:09.265 Latency(us) 00:08:09.265 [2024-11-19T10:19:23.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.266 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:09.266 Nvme1n1 : 1.00 246979.25 964.76 0.00 0.00 516.23 227.95 1531.55 00:08:09.266 [2024-11-19T10:19:23.047Z] =================================================================================================================== 00:08:09.266 [2024-11-19T10:19:23.047Z] Total : 246979.25 964.76 0.00 0.00 516.23 227.95 1531.55 00:08:09.266 00:08:09.266 Latency(us) 00:08:09.266 [2024-11-19T10:19:23.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.266 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:09.266 Nvme1n1 : 1.02 8798.35 34.37 0.00 0.00 14475.41 5385.35 19717.79 00:08:09.266 [2024-11-19T10:19:23.047Z] =================================================================================================================== 00:08:09.266 [2024-11-19T10:19:23.047Z] Total : 8798.35 34.37 0.00 0.00 14475.41 5385.35 19717.79 00:08:09.524 7725.00 IOPS, 30.18 MiB/s 00:08:09.524 Latency(us) 00:08:09.524 [2024-11-19T10:19:23.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.524 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:09.524 Nvme1n1 : 1.01 7820.97 30.55 0.00 0.00 16317.14 4530.53 34192.70 00:08:09.524 [2024-11-19T10:19:23.305Z] =================================================================================================================== 00:08:09.524 [2024-11-19T10:19:23.305Z] Total : 7820.97 30.55 0.00 0.00 16317.14 4530.53 34192.70 00:08:09.524 11093.00 IOPS, 43.33 MiB/s 00:08:09.524 Latency(us) 00:08:09.524 [2024-11-19T10:19:23.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.524 
Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:09.524 Nvme1n1 : 1.01 11157.50 43.58 0.00 0.00 11435.62 4701.50 22339.23 00:08:09.524 [2024-11-19T10:19:23.305Z] =================================================================================================================== 00:08:09.524 [2024-11-19T10:19:23.305Z] Total : 11157.50 43.58 0.00 0.00 11435.62 4701.50 22339.23 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2124073 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2124075 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2124078 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.524 rmmod nvme_tcp 00:08:09.524 rmmod nvme_fabrics 00:08:09.524 rmmod nvme_keyring 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:09.524 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2123938 ']' 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2123938 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2123938 ']' 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2123938 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2123938 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2123938' 00:08:09.784 killing process with pid 2123938 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2123938 00:08:09.784 11:19:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2123938 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.784 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.325 00:08:12.325 real 0m10.859s 00:08:12.325 user 0m16.518s 00:08:12.325 sys 0m6.217s 00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.325 ************************************ 
00:08:12.325 END TEST nvmf_bdev_io_wait
00:08:12.325 ************************************
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:12.325 ************************************
00:08:12.325 START TEST nvmf_queue_depth
00:08:12.325 ************************************
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:08:12.325 * Looking for test storage...
00:08:12.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-:
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-:
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<'
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:12.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:12.325 --rc genhtml_branch_coverage=1
00:08:12.325 --rc genhtml_function_coverage=1
00:08:12.325 --rc genhtml_legend=1
00:08:12.325 --rc geninfo_all_blocks=1
00:08:12.325 --rc geninfo_unexecuted_blocks=1
00:08:12.325
00:08:12.325 '
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:12.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:12.325 --rc genhtml_branch_coverage=1
00:08:12.325 --rc genhtml_function_coverage=1
00:08:12.325 --rc genhtml_legend=1
00:08:12.325 --rc geninfo_all_blocks=1
00:08:12.325 --rc geninfo_unexecuted_blocks=1
00:08:12.325
00:08:12.325 '
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:12.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:12.325 --rc genhtml_branch_coverage=1
00:08:12.325 --rc genhtml_function_coverage=1
00:08:12.325 --rc genhtml_legend=1
00:08:12.325 --rc geninfo_all_blocks=1
00:08:12.325 --rc geninfo_unexecuted_blocks=1
00:08:12.325
00:08:12.325 '
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:12.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:12.325 --rc genhtml_branch_coverage=1
00:08:12.325 --rc genhtml_function_coverage=1
00:08:12.325 --rc genhtml_legend=1
00:08:12.325 --rc geninfo_all_blocks=1
00:08:12.325 --rc geninfo_unexecuted_blocks=1
00:08:12.325
00:08:12.325 '
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:12.325 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:12.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable
00:08:12.326 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=()
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=()
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=()
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=()
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=()
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:08:18.897 Found 0000:86:00.0 (0x8086 - 0x159b)
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:08:18.897 Found 0000:86:00.1 (0x8086 - 0x159b)
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:18.897 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:08:18.898 Found net devices under 0000:86:00.0: cvl_0_0
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:08:18.898 Found net devices under 0000:86:00.1: cvl_0_1
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:18.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:18.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms
00:08:18.898
00:08:18.898 --- 10.0.0.2 ping statistics ---
00:08:18.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:18.898 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:18.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:18.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:08:18.898
00:08:18.898 --- 10.0.0.1 ping statistics ---
00:08:18.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:18.898 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2127878
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2127878
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2127878 ']'
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:18.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:18.898 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:18.898 [2024-11-19 11:19:31.891482] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:08:18.898 [2024-11-19 11:19:31.891532] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:18.898 [2024-11-19 11:19:31.976796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:18.898 [2024-11-19 11:19:32.019937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:18.898 [2024-11-19 11:19:32.019973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:18.898 [2024-11-19 11:19:32.019981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:18.898 [2024-11-19 11:19:32.019987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:18.898 [2024-11-19 11:19:32.019992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:18.898 [2024-11-19 11:19:32.020530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:18.898 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:18.898 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:08:18.898 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:18.898 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:18.898 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:18.898 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:18.898 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:18.898 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.898 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:18.898 [2024-11-19 11:19:32.159785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:18.898 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.898 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:08:18.898 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:18.899 Malloc0
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:18.899 [2024-11-19 11:19:32.209936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2128015
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2128015 /var/tmp/bdevperf.sock
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2128015 ']'
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:08:18.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:18.899 [2024-11-19 11:19:32.259151] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:08:18.899 [2024-11-19 11:19:32.259192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128015 ]
00:08:18.899 [2024-11-19 11:19:32.333634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:18.899 [2024-11-19 11:19:32.377038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:18.899 NVMe0n1
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:18.899 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:08:18.899 Running I/O for 10 seconds...
00:08:21.212 11485.00 IOPS, 44.86 MiB/s [2024-11-19T10:19:35.928Z] 11843.50 IOPS, 46.26 MiB/s [2024-11-19T10:19:36.863Z] 11939.67 IOPS, 46.64 MiB/s [2024-11-19T10:19:37.800Z] 12059.50 IOPS, 47.11 MiB/s [2024-11-19T10:19:38.737Z] 12177.00 IOPS, 47.57 MiB/s [2024-11-19T10:19:39.675Z] 12191.00 IOPS, 47.62 MiB/s [2024-11-19T10:19:41.054Z] 12184.43 IOPS, 47.60 MiB/s [2024-11-19T10:19:41.992Z] 12163.88 IOPS, 47.52 MiB/s [2024-11-19T10:19:42.928Z] 12209.89 IOPS, 47.69 MiB/s [2024-11-19T10:19:42.928Z] 12243.80 IOPS, 47.83 MiB/s 00:08:29.147 Latency(us) 00:08:29.147 [2024-11-19T10:19:42.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.147 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:29.147 Verification LBA range: start 0x0 length 0x4000 00:08:29.148 NVMe0n1 : 10.06 12254.88 47.87 0.00 0.00 83263.68 19375.86 53340.61 00:08:29.148 [2024-11-19T10:19:42.929Z] =================================================================================================================== 00:08:29.148 [2024-11-19T10:19:42.929Z] Total : 12254.88 47.87 0.00 0.00 83263.68 19375.86 53340.61 00:08:29.148 { 00:08:29.148 "results": [ 00:08:29.148 { 00:08:29.148 "job": "NVMe0n1", 00:08:29.148 "core_mask": "0x1", 00:08:29.148 "workload": "verify", 00:08:29.148 "status": "finished", 00:08:29.148 "verify_range": { 00:08:29.148 "start": 0, 00:08:29.148 "length": 16384 00:08:29.148 }, 00:08:29.148 "queue_depth": 1024, 00:08:29.148 "io_size": 4096, 00:08:29.148 "runtime": 10.064566, 00:08:29.148 "iops": 12254.875172958278, 00:08:29.148 "mibps": 47.870606144368274, 00:08:29.148 "io_failed": 0, 00:08:29.148 "io_timeout": 0, 00:08:29.148 "avg_latency_us": 83263.68136310375, 00:08:29.148 "min_latency_us": 19375.86086956522, 00:08:29.148 "max_latency_us": 53340.605217391305 00:08:29.148 } 00:08:29.148 ], 00:08:29.148 "core_count": 1 00:08:29.148 } 00:08:29.148 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 2128015 00:08:29.148 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2128015 ']' 00:08:29.148 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2128015 00:08:29.148 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:29.148 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.148 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2128015 00:08:29.148 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.148 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.148 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2128015' 00:08:29.148 killing process with pid 2128015 00:08:29.148 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2128015 00:08:29.148 Received shutdown signal, test time was about 10.000000 seconds 00:08:29.148 00:08:29.148 Latency(us) 00:08:29.148 [2024-11-19T10:19:42.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.148 [2024-11-19T10:19:42.929Z] =================================================================================================================== 00:08:29.148 [2024-11-19T10:19:42.929Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:29.148 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2128015 00:08:29.407 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:29.407 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:29.407 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:29.407 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:29.407 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:29.407 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:29.407 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.407 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:29.407 rmmod nvme_tcp 00:08:29.407 rmmod nvme_fabrics 00:08:29.407 rmmod nvme_keyring 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2127878 ']' 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2127878 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2127878 ']' 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2127878 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2127878 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2127878' 00:08:29.407 killing process with pid 2127878 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2127878 00:08:29.407 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2127878 00:08:29.667 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.667 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:29.667 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:29.667 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:29.667 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:29.667 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:29.667 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:29.667 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:29.667 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:29.667 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.667 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.667 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.572 11:19:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:31.572 00:08:31.572 real 0m19.672s 00:08:31.572 user 0m22.965s 00:08:31.572 sys 0m6.081s 00:08:31.572 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.572 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.572 ************************************ 00:08:31.572 END TEST nvmf_queue_depth 00:08:31.572 ************************************ 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.831 ************************************ 00:08:31.831 START TEST nvmf_target_multipath 00:08:31.831 ************************************ 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:31.831 * Looking for test storage... 
00:08:31.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:31.831 11:19:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:31.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.831 --rc genhtml_branch_coverage=1 00:08:31.831 --rc genhtml_function_coverage=1 00:08:31.831 --rc genhtml_legend=1 00:08:31.831 --rc geninfo_all_blocks=1 00:08:31.831 --rc geninfo_unexecuted_blocks=1 00:08:31.831 00:08:31.831 ' 00:08:31.831 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:31.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.831 --rc genhtml_branch_coverage=1 00:08:31.831 --rc genhtml_function_coverage=1 00:08:31.831 --rc genhtml_legend=1 00:08:31.831 --rc geninfo_all_blocks=1 00:08:31.831 --rc geninfo_unexecuted_blocks=1 00:08:31.832 00:08:31.832 ' 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.832 --rc genhtml_branch_coverage=1 00:08:31.832 --rc genhtml_function_coverage=1 00:08:31.832 --rc genhtml_legend=1 00:08:31.832 --rc geninfo_all_blocks=1 00:08:31.832 --rc geninfo_unexecuted_blocks=1 00:08:31.832 00:08:31.832 ' 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.832 --rc genhtml_branch_coverage=1 00:08:31.832 --rc genhtml_function_coverage=1 00:08:31.832 --rc genhtml_legend=1 00:08:31.832 --rc geninfo_all_blocks=1 00:08:31.832 --rc geninfo_unexecuted_blocks=1 00:08:31.832 00:08:31.832 ' 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:31.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.832 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:32.091 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:38.663 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:38.663 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.663 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:38.664 Found net devices under 0000:86:00.0: cvl_0_0 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.664 11:19:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:38.664 Found net devices under 0000:86:00.1: cvl_0_1 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:38.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:08:38.664 00:08:38.664 --- 10.0.0.2 ping statistics --- 00:08:38.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.664 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:08:38.664 00:08:38.664 --- 10.0.0.1 ping statistics --- 00:08:38.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.664 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:38.664 only one NIC for nvmf test 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:38.664 11:19:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:38.664 rmmod nvme_tcp 00:08:38.664 rmmod nvme_fabrics 00:08:38.664 rmmod nvme_keyring 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.664 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:40.044 00:08:40.044 real 0m8.403s 00:08:40.044 user 0m1.836s 00:08:40.044 sys 0m4.581s 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.044 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:40.044 ************************************ 00:08:40.044 END TEST nvmf_target_multipath 00:08:40.044 ************************************ 00:08:40.305 11:19:53 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:40.305 11:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.305 11:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.305 11:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.305 ************************************ 00:08:40.305 START TEST nvmf_zcopy 00:08:40.305 ************************************ 00:08:40.305 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:40.305 * Looking for test storage... 00:08:40.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.305 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:40.305 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:40.305 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.305 11:19:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:40.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.305 --rc genhtml_branch_coverage=1 00:08:40.305 --rc genhtml_function_coverage=1 00:08:40.305 --rc genhtml_legend=1 00:08:40.305 --rc geninfo_all_blocks=1 00:08:40.305 --rc geninfo_unexecuted_blocks=1 00:08:40.305 00:08:40.305 ' 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:40.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.305 --rc genhtml_branch_coverage=1 00:08:40.305 --rc genhtml_function_coverage=1 00:08:40.305 --rc genhtml_legend=1 00:08:40.305 --rc geninfo_all_blocks=1 00:08:40.305 --rc geninfo_unexecuted_blocks=1 00:08:40.305 00:08:40.305 ' 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:40.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.305 --rc genhtml_branch_coverage=1 00:08:40.305 --rc genhtml_function_coverage=1 00:08:40.305 --rc genhtml_legend=1 00:08:40.305 --rc geninfo_all_blocks=1 00:08:40.305 --rc geninfo_unexecuted_blocks=1 00:08:40.305 00:08:40.305 ' 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:40.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.305 --rc genhtml_branch_coverage=1 00:08:40.305 --rc 
genhtml_function_coverage=1 00:08:40.305 --rc genhtml_legend=1 00:08:40.305 --rc geninfo_all_blocks=1 00:08:40.305 --rc geninfo_unexecuted_blocks=1 00:08:40.305 00:08:40.305 ' 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.305 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.306 11:19:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:40.306 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.565 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:40.565 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.565 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.565 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.565 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.565 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.565 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:40.566 11:19:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:40.566 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.136 11:19:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:47.136 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:47.137 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:47.137 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:47.137 Found net devices under 0000:86:00.0: cvl_0_0 00:08:47.137 11:19:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:47.137 Found net devices under 0000:86:00.1: cvl_0_1 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.137 11:19:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:47.137 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:08:47.137 00:08:47.137 --- 10.0.0.2 ping statistics --- 00:08:47.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.137 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:47.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:08:47.137 00:08:47.137 --- 10.0.0.1 ping statistics --- 00:08:47.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.137 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2136815 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2136815 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2136815 ']' 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.137 [2024-11-19 11:20:00.150744] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:47.137 [2024-11-19 11:20:00.150791] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.137 [2024-11-19 11:20:00.231580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.137 [2024-11-19 11:20:00.271324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.137 [2024-11-19 11:20:00.271363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:47.137 [2024-11-19 11:20:00.271371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.137 [2024-11-19 11:20:00.271380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.137 [2024-11-19 11:20:00.271385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.137 [2024-11-19 11:20:00.271967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.137 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.138 [2024-11-19 11:20:00.420273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.138 [2024-11-19 11:20:00.440468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.138 malloc0 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:47.138 { 00:08:47.138 "params": { 00:08:47.138 "name": "Nvme$subsystem", 00:08:47.138 "trtype": "$TEST_TRANSPORT", 00:08:47.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.138 "adrfam": "ipv4", 00:08:47.138 "trsvcid": "$NVMF_PORT", 00:08:47.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.138 "hdgst": ${hdgst:-false}, 00:08:47.138 "ddgst": ${ddgst:-false} 00:08:47.138 }, 00:08:47.138 "method": "bdev_nvme_attach_controller" 00:08:47.138 } 00:08:47.138 EOF 00:08:47.138 )") 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:47.138 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:47.138 "params": { 00:08:47.138 "name": "Nvme1", 00:08:47.138 "trtype": "tcp", 00:08:47.138 "traddr": "10.0.0.2", 00:08:47.138 "adrfam": "ipv4", 00:08:47.138 "trsvcid": "4420", 00:08:47.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:47.138 "hdgst": false, 00:08:47.138 "ddgst": false 00:08:47.138 }, 00:08:47.138 "method": "bdev_nvme_attach_controller" 00:08:47.138 }' 00:08:47.138 [2024-11-19 11:20:00.524769] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:47.138 [2024-11-19 11:20:00.524817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2137034 ] 00:08:47.138 [2024-11-19 11:20:00.602350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.138 [2024-11-19 11:20:00.644732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.138 Running I/O for 10 seconds... 
00:08:49.141 8432.00 IOPS, 65.88 MiB/s [2024-11-19T10:20:04.298Z] 8490.00 IOPS, 66.33 MiB/s [2024-11-19T10:20:04.866Z] 8528.33 IOPS, 66.63 MiB/s [2024-11-19T10:20:06.242Z] 8534.00 IOPS, 66.67 MiB/s [2024-11-19T10:20:07.178Z] 8538.60 IOPS, 66.71 MiB/s [2024-11-19T10:20:08.114Z] 8543.17 IOPS, 66.74 MiB/s [2024-11-19T10:20:09.050Z] 8552.71 IOPS, 66.82 MiB/s [2024-11-19T10:20:09.986Z] 8559.00 IOPS, 66.87 MiB/s [2024-11-19T10:20:10.922Z] 8559.00 IOPS, 66.87 MiB/s [2024-11-19T10:20:10.922Z] 8544.40 IOPS, 66.75 MiB/s 00:08:57.141 Latency(us) 00:08:57.141 [2024-11-19T10:20:10.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.141 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:57.141 Verification LBA range: start 0x0 length 0x1000 00:08:57.141 Nvme1n1 : 10.01 8546.93 66.77 0.00 0.00 14933.85 1674.02 23251.03 00:08:57.141 [2024-11-19T10:20:10.922Z] =================================================================================================================== 00:08:57.141 [2024-11-19T10:20:10.922Z] Total : 8546.93 66.77 0.00 0.00 14933.85 1674.02 23251.03 00:08:57.400 11:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2138665 00:08:57.400 11:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:57.400 11:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.400 11:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:57.400 11:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:57.400 11:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:57.400 11:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.400 11:20:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.400 11:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.400 { 00:08:57.400 "params": { 00:08:57.400 "name": "Nvme$subsystem", 00:08:57.400 "trtype": "$TEST_TRANSPORT", 00:08:57.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.400 "adrfam": "ipv4", 00:08:57.400 "trsvcid": "$NVMF_PORT", 00:08:57.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.400 "hdgst": ${hdgst:-false}, 00:08:57.400 "ddgst": ${ddgst:-false} 00:08:57.400 }, 00:08:57.400 "method": "bdev_nvme_attach_controller" 00:08:57.400 } 00:08:57.401 EOF 00:08:57.401 )") 00:08:57.401 11:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:57.401 [2024-11-19 11:20:11.050973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.401 [2024-11-19 11:20:11.051009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.401 11:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:57.401 11:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:57.401 11:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.401 "params": { 00:08:57.401 "name": "Nvme1", 00:08:57.401 "trtype": "tcp", 00:08:57.401 "traddr": "10.0.0.2", 00:08:57.401 "adrfam": "ipv4", 00:08:57.401 "trsvcid": "4420", 00:08:57.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.401 "hdgst": false, 00:08:57.401 "ddgst": false 00:08:57.401 }, 00:08:57.401 "method": "bdev_nvme_attach_controller" 00:08:57.401 }' 00:08:57.401 [2024-11-19 11:20:11.062970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.401 [2024-11-19 11:20:11.062984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.401 [2024-11-19 11:20:11.074992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.401 [2024-11-19 11:20:11.075002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.401 [2024-11-19 11:20:11.087023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.401 [2024-11-19 11:20:11.087034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.401 [2024-11-19 11:20:11.090839] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:08:57.401 [2024-11-19 11:20:11.090879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138665 ] 00:08:57.401 [2024-11-19 11:20:11.099058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.401 [2024-11-19 11:20:11.099070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.401 [2024-11-19 11:20:11.111089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.401 [2024-11-19 11:20:11.111102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.401 [2024-11-19 11:20:11.123124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.401 [2024-11-19 11:20:11.123135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.401 [2024-11-19 11:20:11.135156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.401 [2024-11-19 11:20:11.135166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.401 [2024-11-19 11:20:11.147187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.401 [2024-11-19 11:20:11.147197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.401 [2024-11-19 11:20:11.159221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.401 [2024-11-19 11:20:11.159231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.401 [2024-11-19 11:20:11.166134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.401 [2024-11-19 11:20:11.171251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:57.401 [2024-11-19 11:20:11.171261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.660 [2024-11-19 11:20:11.183296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.660 [2024-11-19 11:20:11.183321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.660 [2024-11-19 11:20:11.195317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.660 [2024-11-19 11:20:11.195327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.660 [2024-11-19 11:20:11.207351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.660 [2024-11-19 11:20:11.207361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.660 [2024-11-19 11:20:11.209182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.660 [2024-11-19 11:20:11.219387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.660 [2024-11-19 11:20:11.219400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.660 [2024-11-19 11:20:11.231418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.660 [2024-11-19 11:20:11.231437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.660 [2024-11-19 11:20:11.243453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.660 [2024-11-19 11:20:11.243470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.660 [2024-11-19 11:20:11.255482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.660 [2024-11-19 11:20:11.255496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.660 [2024-11-19 11:20:11.267514] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.660 [2024-11-19 11:20:11.267528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2123 / nvmf_rpc.c:1517 error pair repeats at ~10-15 ms intervals through 2024-11-19 11:20:13.474 ...]
00:08:57.660 Running I/O for 5 seconds...
00:08:58.699 16440.00 IOPS, 128.44 MiB/s [2024-11-19T10:20:12.480Z]
00:08:59.736 16532.50 IOPS, 129.16 MiB/s [2024-11-19T10:20:13.517Z]
[2024-11-19 11:20:13.474829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.736 [2024-11-19 11:20:13.489074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.736 [2024-11-19 11:20:13.489092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.736 [2024-11-19 11:20:13.500140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.736 [2024-11-19 11:20:13.500158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.736 [2024-11-19 11:20:13.514655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.736 [2024-11-19 11:20:13.514673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.528550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.528569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.542705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.542724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.556590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.556608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.570529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.570547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.581495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.581513] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.595771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.595789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.609962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.609981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.624292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.624312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.638139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.638158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.652516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.652534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.663694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.663713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.677935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.677961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.692140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.692158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:59.995 [2024-11-19 11:20:13.705472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.705490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.719352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.719376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.733281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.733300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.747216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.747235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.995 [2024-11-19 11:20:13.761206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.995 [2024-11-19 11:20:13.761224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.775482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.775500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.789486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.789505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.803244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.803262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.817589] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.817607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.831765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.831785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.845894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.845913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.856773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.856791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.871116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.871133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.884939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.884964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.899027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.899045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.913183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.913202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.923974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.923992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.938246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.938265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.952183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.952201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.966083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.966111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.975542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.975565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:13.985111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:13.985129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:14.000752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:14.000771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:14.015697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 [2024-11-19 11:20:14.015716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.254 [2024-11-19 11:20:14.030332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.254 
[2024-11-19 11:20:14.030352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.513 [2024-11-19 11:20:14.044346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.513 [2024-11-19 11:20:14.044366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.513 [2024-11-19 11:20:14.058831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.513 [2024-11-19 11:20:14.058850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.513 [2024-11-19 11:20:14.074192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.513 [2024-11-19 11:20:14.074211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.513 [2024-11-19 11:20:14.084029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.513 [2024-11-19 11:20:14.084048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.513 [2024-11-19 11:20:14.098417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.098436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.112183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.112202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.126276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.126296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.140101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.140121] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.153928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.153954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.167762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.167781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.181892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.181911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.195733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.195752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.210040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.210060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.223996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.224015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.237870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.237893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.251727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.251747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:00.514 [2024-11-19 11:20:14.266077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.266097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.277168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.277186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.514 [2024-11-19 11:20:14.291590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.514 [2024-11-19 11:20:14.291609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.773 [2024-11-19 11:20:14.305502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.773 [2024-11-19 11:20:14.305522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.773 [2024-11-19 11:20:14.319679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.773 [2024-11-19 11:20:14.319698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.773 [2024-11-19 11:20:14.329229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.329248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.343620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.343639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.357292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.357312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.372029] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.372048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 16573.67 IOPS, 129.48 MiB/s [2024-11-19T10:20:14.555Z] [2024-11-19 11:20:14.387151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.387170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.401246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.401264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.415035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.415054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.429437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.429455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.443365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.443383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.457914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.457932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.468827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.468845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.483438] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.483456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.497749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.497767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.511734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.511753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.526244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.526263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.774 [2024-11-19 11:20:14.541784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.774 [2024-11-19 11:20:14.541803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.556429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.556448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.571446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.571464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.580974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.580994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.595599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.595617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.609295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.609314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.624063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.624081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.639417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.639435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.654035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.654054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.663164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.663182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.677559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.677577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.690624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.690643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.705068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 
[2024-11-19 11:20:14.705087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.718940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.718966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.733035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.733055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.747450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.747468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.758205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.758224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.772676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.772694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.786416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.786434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.033 [2024-11-19 11:20:14.800644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.033 [2024-11-19 11:20:14.800662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.814839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.814857] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.828766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.828785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.842534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.842553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.856701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.856721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.870913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.870933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.884627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.884646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.898428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.898446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.912515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.912533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.926313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.926332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:01.292 [2024-11-19 11:20:14.940393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.940411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.954036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.954055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.968097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.968116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.982182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.982200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:14.996347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:14.996366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:15.005515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:15.005539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:15.019725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:15.019743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:15.033617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:15.033636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.292 [2024-11-19 11:20:15.047639] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.292 [2024-11-19 11:20:15.047658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [the same error pair repeats roughly every 10-15 ms from 11:20:15.061 through 11:20:16.352; repeated occurrences elided] 00:09:01.811 16589.00 IOPS, 129.60 MiB/s [2024-11-19T10:20:15.592Z] 00:09:02.589 [2024-11-19 11:20:16.362829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:09:02.589 [2024-11-19 11:20:16.362846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.848 [2024-11-19 11:20:16.377435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.848 [2024-11-19 11:20:16.377454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.848 16596.00 IOPS, 129.66 MiB/s [2024-11-19T10:20:16.629Z] [2024-11-19 11:20:16.390470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.848 [2024-11-19 11:20:16.390488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.848 00:09:02.848 Latency(us) 00:09:02.848 [2024-11-19T10:20:16.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.848 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:02.848 Nvme1n1 : 5.01 16596.41 129.66 0.00 0.00 7704.62 2906.38 16982.37 00:09:02.848 [2024-11-19T10:20:16.629Z] =================================================================================================================== 00:09:02.848 [2024-11-19T10:20:16.629Z] Total : 16596.41 129.66 0.00 0.00 7704.62 2906.38 16982.37 00:09:02.848 [2024-11-19 11:20:16.399523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.848 [2024-11-19 11:20:16.399539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.848 [2024-11-19 11:20:16.411553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.848 [2024-11-19 11:20:16.411567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.848 [2024-11-19 11:20:16.423591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.849 [2024-11-19 11:20:16.423610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:02.849 [2024-11-19 11:20:16.435620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.849 [2024-11-19 11:20:16.435636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.849 [2024-11-19 11:20:16.447651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.849 [2024-11-19 11:20:16.447667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.849 [2024-11-19 11:20:16.459681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.849 [2024-11-19 11:20:16.459696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.849 [2024-11-19 11:20:16.471713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.849 [2024-11-19 11:20:16.471735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.849 [2024-11-19 11:20:16.483747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.849 [2024-11-19 11:20:16.483760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.849 [2024-11-19 11:20:16.495778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.849 [2024-11-19 11:20:16.495791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.849 [2024-11-19 11:20:16.507806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.849 [2024-11-19 11:20:16.507816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.849 [2024-11-19 11:20:16.519846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.849 [2024-11-19 11:20:16.519860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.849 [2024-11-19 11:20:16.531873] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.849 [2024-11-19 11:20:16.531885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.849 [2024-11-19 11:20:16.543903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.849 [2024-11-19 11:20:16.543913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2138665) - No such process 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2138665 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.849 delay0 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.849 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:03.108 [2024-11-19 11:20:16.738100] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:09.668 Initializing NVMe Controllers 00:09:09.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:09.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:09.668 Initialization complete. Launching workers. 00:09:09.668 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 721 00:09:09.668 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1008, failed to submit 33 00:09:09.668 success 823, unsuccessful 185, failed 0 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.668 rmmod 
nvme_tcp 00:09:09.668 rmmod nvme_fabrics 00:09:09.668 rmmod nvme_keyring 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2136815 ']' 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2136815 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2136815 ']' 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2136815 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.668 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2136815 00:09:09.668 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:09.668 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:09.668 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2136815' 00:09:09.668 killing process with pid 2136815 00:09:09.668 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2136815 00:09:09.668 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2136815 00:09:09.668 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.668 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- 
# [[ tcp == \t\c\p ]] 00:09:09.668 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.668 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:09.669 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:09.669 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.669 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.669 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.669 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:09.669 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.669 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.669 11:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.579 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:11.579 00:09:11.579 real 0m31.402s 00:09:11.579 user 0m41.767s 00:09:11.579 sys 0m11.198s 00:09:11.579 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.579 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.579 ************************************ 00:09:11.579 END TEST nvmf_zcopy 00:09:11.579 ************************************ 00:09:11.579 11:20:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:11.579 11:20:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
00:09:11.579 11:20:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.579 11:20:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.579 ************************************ 00:09:11.579 START TEST nvmf_nmic 00:09:11.579 ************************************ 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:11.840 * Looking for test storage... 00:09:11.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@340 -- # ver1_l=2 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:11.840 
11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:11.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.840 --rc genhtml_branch_coverage=1 00:09:11.840 --rc genhtml_function_coverage=1 00:09:11.840 --rc genhtml_legend=1 00:09:11.840 --rc geninfo_all_blocks=1 00:09:11.840 --rc geninfo_unexecuted_blocks=1 00:09:11.840 00:09:11.840 ' 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:11.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.840 --rc genhtml_branch_coverage=1 00:09:11.840 --rc genhtml_function_coverage=1 00:09:11.840 --rc genhtml_legend=1 00:09:11.840 --rc geninfo_all_blocks=1 00:09:11.840 --rc geninfo_unexecuted_blocks=1 00:09:11.840 00:09:11.840 ' 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:11.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.840 --rc genhtml_branch_coverage=1 00:09:11.840 --rc genhtml_function_coverage=1 00:09:11.840 --rc genhtml_legend=1 00:09:11.840 --rc geninfo_all_blocks=1 00:09:11.840 --rc geninfo_unexecuted_blocks=1 00:09:11.840 00:09:11.840 ' 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:11.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.840 --rc genhtml_branch_coverage=1 00:09:11.840 --rc genhtml_function_coverage=1 00:09:11.840 --rc genhtml_legend=1 00:09:11.840 --rc geninfo_all_blocks=1 00:09:11.840 --rc geninfo_unexecuted_blocks=1 00:09:11.840 00:09:11.840 ' 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.840 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:11.841 
11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:11.841 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.417 11:20:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:18.417 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.417 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:18.418 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:18.418 Found net devices under 0000:86:00.0: cvl_0_0 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:18.418 Found net devices under 0000:86:00.1: cvl_0_1 00:09:18.418 
11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:18.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:09:18.418 00:09:18.418 --- 10.0.0.2 ping statistics --- 00:09:18.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.418 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:18.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:09:18.418 00:09:18.418 --- 10.0.0.1 ping statistics --- 00:09:18.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.418 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2144263 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2144263 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2144263 ']' 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.418 [2024-11-19 11:20:31.582470] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:09:18.418 [2024-11-19 11:20:31.582516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.418 [2024-11-19 11:20:31.663231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.418 [2024-11-19 11:20:31.706625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.418 [2024-11-19 11:20:31.706665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:18.418 [2024-11-19 11:20:31.706672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.418 [2024-11-19 11:20:31.706678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.418 [2024-11-19 11:20:31.706683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:18.418 [2024-11-19 11:20:31.708281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.418 [2024-11-19 11:20:31.708394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.418 [2024-11-19 11:20:31.708521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.418 [2024-11-19 11:20:31.708523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.418 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.418 [2024-11-19 11:20:31.849951] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.419 
11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 Malloc0 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 [2024-11-19 11:20:31.914680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:18.419 test case1: single bdev can't be used in multiple subsystems 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 [2024-11-19 11:20:31.946605] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:18.419 [2024-11-19 
11:20:31.946628] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:18.419 [2024-11-19 11:20:31.946636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.419 request: 00:09:18.419 { 00:09:18.419 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:18.419 "namespace": { 00:09:18.419 "bdev_name": "Malloc0", 00:09:18.419 "no_auto_visible": false 00:09:18.419 }, 00:09:18.419 "method": "nvmf_subsystem_add_ns", 00:09:18.419 "req_id": 1 00:09:18.419 } 00:09:18.419 Got JSON-RPC error response 00:09:18.419 response: 00:09:18.419 { 00:09:18.419 "code": -32602, 00:09:18.419 "message": "Invalid parameters" 00:09:18.419 } 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:18.419 Adding namespace failed - expected result. 
00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:18.419 test case2: host connect to nvmf target in multiple paths 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.419 [2024-11-19 11:20:31.958750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.419 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.796 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:20.732 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:20.732 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:20.732 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.732 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:20.732 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:22.636 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:22.636 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:22.636 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.636 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:22.636 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.636 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:22.636 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:22.636 [global] 00:09:22.636 thread=1 00:09:22.636 invalidate=1 00:09:22.636 rw=write 00:09:22.636 time_based=1 00:09:22.636 runtime=1 00:09:22.636 ioengine=libaio 00:09:22.636 direct=1 00:09:22.636 bs=4096 00:09:22.636 iodepth=1 00:09:22.636 norandommap=0 00:09:22.636 numjobs=1 00:09:22.636 00:09:22.636 verify_dump=1 00:09:22.636 verify_backlog=512 00:09:22.636 verify_state_save=0 00:09:22.636 do_verify=1 00:09:22.636 verify=crc32c-intel 00:09:22.636 [job0] 00:09:22.636 filename=/dev/nvme0n1 00:09:22.636 Could not set queue depth (nvme0n1) 00:09:22.895 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.895 fio-3.35 00:09:22.895 Starting 1 thread 00:09:24.272 00:09:24.272 job0: (groupid=0, jobs=1): err= 0: pid=2145169: Tue Nov 19 11:20:37 2024 00:09:24.272 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:24.272 slat (nsec): min=6959, max=42667, avg=8109.35, stdev=1599.20 00:09:24.272 clat (usec): min=155, max=469, avg=200.77, stdev=30.52 00:09:24.272 lat (usec): min=164, max=500, avg=208.88, 
stdev=30.76 00:09:24.272 clat percentiles (usec): 00:09:24.272 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:09:24.272 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 200], 00:09:24.272 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 249], 95.00th=[ 262], 00:09:24.272 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 396], 99.95th=[ 449], 00:09:24.272 | 99.99th=[ 469] 00:09:24.272 write: IOPS=2788, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec); 0 zone resets 00:09:24.272 slat (nsec): min=10048, max=45406, avg=11296.90, stdev=1751.05 00:09:24.272 clat (usec): min=115, max=339, avg=149.62, stdev=22.04 00:09:24.272 lat (usec): min=126, max=383, avg=160.92, stdev=22.46 00:09:24.272 clat percentiles (usec): 00:09:24.272 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 126], 20.00th=[ 129], 00:09:24.272 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 151], 60.00th=[ 159], 00:09:24.272 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 182], 00:09:24.272 | 99.00th=[ 210], 99.50th=[ 217], 99.90th=[ 253], 99.95th=[ 302], 00:09:24.272 | 99.99th=[ 338] 00:09:24.272 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:24.272 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:24.272 lat (usec) : 250=95.27%, 500=4.73% 00:09:24.272 cpu : usr=4.70%, sys=8.00%, ctx=5351, majf=0, minf=1 00:09:24.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.272 issued rwts: total=2560,2791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.272 00:09:24.272 Run status group 0 (all jobs): 00:09:24.272 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:09:24.272 WRITE: bw=10.9MiB/s (11.4MB/s), 
10.9MiB/s-10.9MiB/s (11.4MB/s-11.4MB/s), io=10.9MiB (11.4MB), run=1001-1001msec 00:09:24.272 00:09:24.272 Disk stats (read/write): 00:09:24.272 nvme0n1: ios=2281/2560, merge=0/0, ticks=443/357, in_queue=800, util=91.18% 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.272 11:20:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.272 rmmod nvme_tcp 00:09:24.272 rmmod nvme_fabrics 00:09:24.272 rmmod nvme_keyring 00:09:24.272 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.272 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:24.272 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:24.272 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2144263 ']' 00:09:24.272 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2144263 00:09:24.272 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2144263 ']' 00:09:24.272 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2144263 00:09:24.272 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:24.272 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.272 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2144263 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2144263' 00:09:24.532 killing process with pid 2144263 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2144263 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2144263 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.532 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:27.071 00:09:27.071 real 0m14.964s 00:09:27.071 user 0m33.158s 00:09:27.071 sys 0m5.438s 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.071 ************************************ 00:09:27.071 END TEST nvmf_nmic 00:09:27.071 ************************************ 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:27.071 11:20:40 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.071 ************************************ 00:09:27.071 START TEST nvmf_fio_target 00:09:27.071 ************************************ 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:27.071 * Looking for test storage... 00:09:27.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.071 
11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:27.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.071 --rc genhtml_branch_coverage=1 00:09:27.071 --rc genhtml_function_coverage=1 00:09:27.071 --rc genhtml_legend=1 00:09:27.071 --rc geninfo_all_blocks=1 00:09:27.071 --rc geninfo_unexecuted_blocks=1 00:09:27.071 00:09:27.071 ' 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:27.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.071 --rc genhtml_branch_coverage=1 00:09:27.071 --rc genhtml_function_coverage=1 00:09:27.071 --rc genhtml_legend=1 00:09:27.071 --rc geninfo_all_blocks=1 00:09:27.071 --rc geninfo_unexecuted_blocks=1 00:09:27.071 00:09:27.071 ' 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:27.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.071 --rc genhtml_branch_coverage=1 00:09:27.071 --rc genhtml_function_coverage=1 00:09:27.071 --rc genhtml_legend=1 00:09:27.071 --rc geninfo_all_blocks=1 00:09:27.071 --rc geninfo_unexecuted_blocks=1 00:09:27.071 00:09:27.071 ' 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:27.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.071 --rc genhtml_branch_coverage=1 00:09:27.071 --rc 
genhtml_function_coverage=1 00:09:27.071 --rc genhtml_legend=1 00:09:27.071 --rc geninfo_all_blocks=1 00:09:27.071 --rc geninfo_unexecuted_blocks=1 00:09:27.071 00:09:27.071 ' 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.071 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.072 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.642 11:20:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.642 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:33.643 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:33.643 11:20:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:33.643 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:33.643 Found net devices under 0000:86:00.0: cvl_0_0 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:33.643 Found net devices under 0000:86:00.1: cvl_0_1 
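The trace above (common.sh@411–429) resolves each NVMe-capable PCI function to its kernel net device by globbing sysfs and stripping the directory prefix. A minimal standalone sketch of that mapping, with the sysfs root parameterized so it can be exercised against a throwaway tree (on a real system you would pass /sys; the function name is ours, not common.sh's):

```shell
#!/usr/bin/env bash
# Sketch of the sysfs glob common.sh uses to map a PCI function
# (e.g. 0000:86:00.0) to its net device name(s) (e.g. cvl_0_0).
net_devs_for_pci() {
    local sysfs_root=$1 pci=$2
    local devs=("$sysfs_root/bus/pci/devices/$pci/net/"*)
    # With nullglob off, a non-matching glob stays literal; treat that
    # as "no net device bound" and fail, mirroring the (( 1 == 0 )) check.
    [[ -e ${devs[0]} ]] || return 1
    printf '%s\n' "${devs[@]##*/}"   # keep only the interface names
}
```

Pointing it at a fake tree with `mkdir -p "$tmp/bus/pci/devices/0000:86:00.0/net/cvl_0_0"` reproduces the "Found net devices under 0000:86:00.0: cvl_0_0" lookup from this log.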
00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:33.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:09:33.643 00:09:33.643 --- 10.0.0.2 ping statistics --- 00:09:33.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.643 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:09:33.643 00:09:33.643 --- 10.0.0.1 ping statistics --- 00:09:33.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.643 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
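The nvmf_tcp_init sequence traced above (common.sh@267–291) moves one port of the NIC pair into a private namespace so that the target (10.0.0.2, cvl_0_0) and initiator (10.0.0.1, cvl_0_1) actually traverse the wire, then verifies both directions with ping. A dry-run sketch of that wiring; run() only echoes each command here, and replacing its body with "$@" (as root) would execute it for real. Interface names, namespace name, and addresses are the ones from this log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init from nvmf/common.sh.
NS=cvl_0_0_ns_spdk
run() { echo "$*"; }   # swap for: "$@"  (needs root) to apply

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add $NS
run ip link set cvl_0_0 netns $NS
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec $NS ip link set cvl_0_0 up
run ip netns exec $NS ip link set lo up
# Open the NVMe/TCP port on the initiator-side interface.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify reachability both ways, as the log's ping output shows.
run ping -c 1 10.0.0.2
run ip netns exec $NS ping -c 1 10.0.0.1
```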
00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2148981 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2148981 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2148981 ']' 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.643 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.643 [2024-11-19 11:20:46.632366] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:09:33.643 [2024-11-19 11:20:46.632416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.643 [2024-11-19 11:20:46.714593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.643 [2024-11-19 11:20:46.756466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.643 [2024-11-19 11:20:46.756504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.643 [2024-11-19 11:20:46.756512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.643 [2024-11-19 11:20:46.756518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.643 [2024-11-19 11:20:46.756523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:33.643 [2024-11-19 11:20:46.758117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.643 [2024-11-19 11:20:46.758223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.643 [2024-11-19 11:20:46.758339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.644 [2024-11-19 11:20:46.758340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.644 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.644 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:33.644 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.644 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.644 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.644 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.644 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:33.644 [2024-11-19 11:20:47.067678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.644 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.644 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:33.644 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.903 11:20:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:33.903 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.161 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:34.161 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.420 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:34.420 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:34.420 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.679 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:34.679 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.938 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:34.938 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.197 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:35.197 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:35.456 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:35.456 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:35.456 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.716 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:35.716 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:35.976 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.235 [2024-11-19 11:20:49.778733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.235 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:36.235 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:36.494 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
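Before the `nvme connect` above, target/fio.sh has assembled the whole target over JSON-RPC: TCP transport, seven malloc bdevs, a raid0 and a concat bdev, one subsystem with four namespaces, and a listener on 10.0.0.2:4420. A consolidated dry-run sketch of that rpc.py sequence in the order the trace shows; rpc() only prints each call (set RPC to the real scripts/rpc.py and drop the echo to execute), and the `-> MallocN` comments record the names the log shows each create returning:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the rpc.py sequence traced in target/fio.sh above.
RPC=${RPC:-scripts/rpc.py}
rpc() { echo "$RPC $*"; }

nqn=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512            # -> Malloc0
rpc bdev_malloc_create 64 512            # -> Malloc1
rpc bdev_malloc_create 64 512            # -> Malloc2
rpc bdev_malloc_create 64 512            # -> Malloc3
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc bdev_malloc_create 64 512            # -> Malloc4
rpc bdev_malloc_create 64 512            # -> Malloc5
rpc bdev_malloc_create 64 512            # -> Malloc6
rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns "$nqn" Malloc0
rpc nvmf_subsystem_add_ns "$nqn" Malloc1
rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_ns "$nqn" raid0
rpc nvmf_subsystem_add_ns "$nqn" concat0
```

Four namespaces on cnode1 is why the subsequent `waitforserial SPDKISFASTANDAWESOME 4` expects four block devices (nvme0n1..nvme0n4) before fio starts.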
00:09:37.871 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:37.871 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:37.871 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.871 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:37.871 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:37.871 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:39.776 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:39.776 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:39.776 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.776 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:39.776 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.776 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:39.776 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:39.776 [global] 00:09:39.776 thread=1 00:09:39.776 invalidate=1 00:09:39.776 rw=write 00:09:39.776 time_based=1 00:09:39.776 runtime=1 00:09:39.776 ioengine=libaio 00:09:39.776 direct=1 00:09:39.776 bs=4096 00:09:39.776 iodepth=1 00:09:39.776 norandommap=0 00:09:39.776 numjobs=1 00:09:39.776 00:09:39.776 
verify_dump=1 00:09:39.776 verify_backlog=512 00:09:39.776 verify_state_save=0 00:09:39.776 do_verify=1 00:09:39.776 verify=crc32c-intel 00:09:39.776 [job0] 00:09:39.776 filename=/dev/nvme0n1 00:09:39.776 [job1] 00:09:39.776 filename=/dev/nvme0n2 00:09:39.776 [job2] 00:09:39.776 filename=/dev/nvme0n3 00:09:39.776 [job3] 00:09:39.776 filename=/dev/nvme0n4 00:09:39.776 Could not set queue depth (nvme0n1) 00:09:39.776 Could not set queue depth (nvme0n2) 00:09:39.776 Could not set queue depth (nvme0n3) 00:09:39.776 Could not set queue depth (nvme0n4) 00:09:40.035 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.035 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.035 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.035 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.035 fio-3.35 00:09:40.035 Starting 4 threads 00:09:41.412 00:09:41.412 job0: (groupid=0, jobs=1): err= 0: pid=2150455: Tue Nov 19 11:20:54 2024 00:09:41.412 read: IOPS=1030, BW=4123KiB/s (4222kB/s)(4148KiB/1006msec) 00:09:41.412 slat (nsec): min=6543, max=29244, avg=7550.49, stdev=2002.42 00:09:41.412 clat (usec): min=165, max=41007, avg=705.89, stdev=4367.36 00:09:41.412 lat (usec): min=172, max=41029, avg=713.44, stdev=4368.95 00:09:41.412 clat percentiles (usec): 00:09:41.412 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:09:41.412 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 227], 60.00th=[ 239], 00:09:41.412 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 269], 00:09:41.412 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:41.412 | 99.99th=[41157] 00:09:41.412 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:09:41.412 slat (usec): min=9, max=582, avg=11.41, 
stdev=14.68 00:09:41.412 clat (usec): min=111, max=304, avg=158.10, stdev=22.23 00:09:41.412 lat (usec): min=121, max=759, avg=169.51, stdev=27.11 00:09:41.412 clat percentiles (usec): 00:09:41.412 | 1.00th=[ 124], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 141], 00:09:41.412 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:09:41.412 | 70.00th=[ 167], 80.00th=[ 180], 90.00th=[ 192], 95.00th=[ 198], 00:09:41.412 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 285], 99.95th=[ 306], 00:09:41.412 | 99.99th=[ 306] 00:09:41.412 bw ( KiB/s): min= 120, max=12168, per=30.18%, avg=6144.00, stdev=8519.22, samples=2 00:09:41.412 iops : min= 30, max= 3042, avg=1536.00, stdev=2129.81, samples=2 00:09:41.412 lat (usec) : 250=90.94%, 500=8.55% 00:09:41.412 lat (msec) : 20=0.04%, 50=0.47% 00:09:41.412 cpu : usr=1.29%, sys=2.69%, ctx=2576, majf=0, minf=1 00:09:41.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.412 issued rwts: total=1037,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.412 job1: (groupid=0, jobs=1): err= 0: pid=2150456: Tue Nov 19 11:20:54 2024 00:09:41.412 read: IOPS=165, BW=663KiB/s (679kB/s)(664KiB/1001msec) 00:09:41.412 slat (nsec): min=7450, max=26793, avg=10294.54, stdev=5097.61 00:09:41.412 clat (usec): min=190, max=41052, avg=5385.67, stdev=13581.55 00:09:41.412 lat (usec): min=198, max=41074, avg=5395.96, stdev=13586.19 00:09:41.412 clat percentiles (usec): 00:09:41.412 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 215], 00:09:41.412 | 30.00th=[ 221], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 245], 00:09:41.412 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[41157], 95.00th=[41157], 00:09:41.412 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:09:41.412 | 99.99th=[41157] 00:09:41.412 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:41.412 slat (usec): min=3, max=460, avg=13.33, stdev=20.16 00:09:41.412 clat (usec): min=129, max=568, avg=186.37, stdev=28.81 00:09:41.412 lat (usec): min=133, max=648, avg=199.70, stdev=35.18 00:09:41.412 clat percentiles (usec): 00:09:41.412 | 1.00th=[ 139], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 172], 00:09:41.412 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:09:41.412 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 221], 00:09:41.412 | 99.00th=[ 281], 99.50th=[ 310], 99.90th=[ 570], 99.95th=[ 570], 00:09:41.412 | 99.99th=[ 570] 00:09:41.412 bw ( KiB/s): min= 4096, max= 4096, per=20.12%, avg=4096.00, stdev= 0.00, samples=1 00:09:41.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:41.412 lat (usec) : 250=89.97%, 500=6.78%, 750=0.15% 00:09:41.412 lat (msec) : 50=3.10% 00:09:41.412 cpu : usr=0.70%, sys=1.00%, ctx=680, majf=0, minf=1 00:09:41.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.412 issued rwts: total=166,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.412 job2: (groupid=0, jobs=1): err= 0: pid=2150457: Tue Nov 19 11:20:54 2024 00:09:41.412 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:09:41.412 slat (nsec): min=10434, max=28241, avg=22285.50, stdev=3025.34 00:09:41.412 clat (usec): min=40900, max=42018, avg=41012.71, stdev=227.01 00:09:41.412 lat (usec): min=40922, max=42046, avg=41034.99, stdev=228.23 00:09:41.412 clat percentiles (usec): 00:09:41.412 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:41.412 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:41.412 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:41.412 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:41.413 | 99.99th=[42206] 00:09:41.413 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:09:41.413 slat (nsec): min=10590, max=40485, avg=11955.65, stdev=2351.87 00:09:41.413 clat (usec): min=144, max=676, avg=179.38, stdev=42.67 00:09:41.413 lat (usec): min=155, max=703, avg=191.33, stdev=43.41 00:09:41.413 clat percentiles (usec): 00:09:41.413 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:09:41.413 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 176], 00:09:41.413 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 204], 95.00th=[ 251], 00:09:41.413 | 99.00th=[ 306], 99.50th=[ 529], 99.90th=[ 676], 99.95th=[ 676], 00:09:41.413 | 99.99th=[ 676] 00:09:41.413 bw ( KiB/s): min= 4096, max= 4096, per=20.12%, avg=4096.00, stdev= 0.00, samples=1 00:09:41.413 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:41.413 lat (usec) : 250=91.01%, 500=4.31%, 750=0.56% 00:09:41.413 lat (msec) : 50=4.12% 00:09:41.413 cpu : usr=1.00%, sys=0.40%, ctx=535, majf=0, minf=1 00:09:41.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.413 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.413 job3: (groupid=0, jobs=1): err= 0: pid=2150458: Tue Nov 19 11:20:54 2024 00:09:41.413 read: IOPS=2248, BW=8995KiB/s (9211kB/s)(9004KiB/1001msec) 00:09:41.413 slat (nsec): min=7653, max=33836, avg=8699.63, stdev=1246.28 00:09:41.413 clat (usec): min=165, max=293, avg=211.54, stdev=17.45 00:09:41.413 lat (usec): min=173, 
max=302, avg=220.24, stdev=17.52 00:09:41.413 clat percentiles (usec): 00:09:41.413 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:09:41.413 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:09:41.413 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 241], 00:09:41.413 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 293], 99.95th=[ 293], 00:09:41.413 | 99.99th=[ 293] 00:09:41.413 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:41.413 slat (usec): min=11, max=641, avg=12.98, stdev=12.57 00:09:41.413 clat (usec): min=124, max=330, avg=178.25, stdev=36.37 00:09:41.413 lat (usec): min=136, max=845, avg=191.23, stdev=38.94 00:09:41.413 clat percentiles (usec): 00:09:41.413 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 145], 00:09:41.413 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 169], 60.00th=[ 180], 00:09:41.413 | 70.00th=[ 192], 80.00th=[ 229], 90.00th=[ 239], 95.00th=[ 241], 00:09:41.413 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 269], 99.95th=[ 322], 00:09:41.413 | 99.99th=[ 330] 00:09:41.413 bw ( KiB/s): min= 9904, max= 9904, per=48.65%, avg=9904.00, stdev= 0.00, samples=1 00:09:41.413 iops : min= 2476, max= 2476, avg=2476.00, stdev= 0.00, samples=1 00:09:41.413 lat (usec) : 250=98.75%, 500=1.25% 00:09:41.413 cpu : usr=4.20%, sys=7.80%, ctx=4813, majf=0, minf=1 00:09:41.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.413 issued rwts: total=2251,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.413 00:09:41.413 Run status group 0 (all jobs): 00:09:41.413 READ: bw=13.5MiB/s (14.2MB/s), 87.7KiB/s-8995KiB/s (89.8kB/s-9211kB/s), io=13.6MiB (14.2MB), run=1001-1006msec 00:09:41.413 WRITE: bw=19.9MiB/s 
(20.8MB/s), 2042KiB/s-9.99MiB/s (2091kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1006msec 00:09:41.413 00:09:41.413 Disk stats (read/write): 00:09:41.413 nvme0n1: ios=1094/1536, merge=0/0, ticks=642/227, in_queue=869, util=85.67% 00:09:41.413 nvme0n2: ios=71/512, merge=0/0, ticks=821/97, in_queue=918, util=89.94% 00:09:41.413 nvme0n3: ios=75/512, merge=0/0, ticks=816/85, in_queue=901, util=94.79% 00:09:41.413 nvme0n4: ios=2036/2048, merge=0/0, ticks=479/353, in_queue=832, util=94.22% 00:09:41.413 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:41.413 [global] 00:09:41.413 thread=1 00:09:41.413 invalidate=1 00:09:41.413 rw=randwrite 00:09:41.413 time_based=1 00:09:41.413 runtime=1 00:09:41.413 ioengine=libaio 00:09:41.413 direct=1 00:09:41.413 bs=4096 00:09:41.413 iodepth=1 00:09:41.413 norandommap=0 00:09:41.413 numjobs=1 00:09:41.413 00:09:41.413 verify_dump=1 00:09:41.413 verify_backlog=512 00:09:41.413 verify_state_save=0 00:09:41.413 do_verify=1 00:09:41.413 verify=crc32c-intel 00:09:41.413 [job0] 00:09:41.413 filename=/dev/nvme0n1 00:09:41.413 [job1] 00:09:41.413 filename=/dev/nvme0n2 00:09:41.413 [job2] 00:09:41.413 filename=/dev/nvme0n3 00:09:41.413 [job3] 00:09:41.413 filename=/dev/nvme0n4 00:09:41.413 Could not set queue depth (nvme0n1) 00:09:41.413 Could not set queue depth (nvme0n2) 00:09:41.413 Could not set queue depth (nvme0n3) 00:09:41.413 Could not set queue depth (nvme0n4) 00:09:41.672 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.672 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.672 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.672 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.672 fio-3.35 00:09:41.672 Starting 4 threads 00:09:43.140 00:09:43.140 job0: (groupid=0, jobs=1): err= 0: pid=2150832: Tue Nov 19 11:20:56 2024 00:09:43.140 read: IOPS=146, BW=584KiB/s (598kB/s)(592KiB/1013msec) 00:09:43.140 slat (nsec): min=6813, max=26562, avg=9655.15, stdev=5102.81 00:09:43.140 clat (usec): min=200, max=41042, avg=6114.08, stdev=14222.13 00:09:43.140 lat (usec): min=207, max=41064, avg=6123.73, stdev=14226.72 00:09:43.140 clat percentiles (usec): 00:09:43.140 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 237], 20.00th=[ 285], 00:09:43.140 | 30.00th=[ 314], 40.00th=[ 347], 50.00th=[ 404], 60.00th=[ 416], 00:09:43.140 | 70.00th=[ 420], 80.00th=[ 433], 90.00th=[41157], 95.00th=[41157], 00:09:43.140 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:43.140 | 99.99th=[41157] 00:09:43.140 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:09:43.140 slat (nsec): min=9776, max=38516, avg=12610.61, stdev=2586.71 00:09:43.140 clat (usec): min=140, max=327, avg=188.53, stdev=37.18 00:09:43.140 lat (usec): min=153, max=354, avg=201.14, stdev=36.91 00:09:43.140 clat percentiles (usec): 00:09:43.140 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:09:43.140 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 184], 00:09:43.140 | 70.00th=[ 196], 80.00th=[ 225], 90.00th=[ 243], 95.00th=[ 255], 00:09:43.140 | 99.00th=[ 318], 99.50th=[ 318], 99.90th=[ 326], 99.95th=[ 326], 00:09:43.140 | 99.99th=[ 326] 00:09:43.140 bw ( KiB/s): min= 4087, max= 4087, per=24.61%, avg=4087.00, stdev= 0.00, samples=1 00:09:43.140 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:43.140 lat (usec) : 250=76.52%, 500=20.15%, 750=0.15% 00:09:43.140 lat (msec) : 50=3.18% 00:09:43.140 cpu : usr=0.69%, sys=0.79%, ctx=663, majf=0, minf=1 00:09:43.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:43.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.140 issued rwts: total=148,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.140 job1: (groupid=0, jobs=1): err= 0: pid=2150835: Tue Nov 19 11:20:56 2024 00:09:43.140 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:43.140 slat (nsec): min=6709, max=26585, avg=8084.19, stdev=3020.48 00:09:43.140 clat (usec): min=178, max=41982, avg=1666.70, stdev=7560.73 00:09:43.140 lat (usec): min=185, max=42005, avg=1674.78, stdev=7563.49 00:09:43.140 clat percentiles (usec): 00:09:43.140 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 208], 00:09:43.140 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:09:43.140 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 281], 00:09:43.140 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:43.140 | 99.99th=[42206] 00:09:43.140 write: IOPS=675, BW=2701KiB/s (2766kB/s)(2704KiB/1001msec); 0 zone resets 00:09:43.140 slat (nsec): min=9383, max=63326, avg=12014.75, stdev=2780.70 00:09:43.140 clat (usec): min=115, max=322, avg=192.87, stdev=41.78 00:09:43.140 lat (usec): min=125, max=354, avg=204.88, stdev=42.56 00:09:43.140 clat percentiles (usec): 00:09:43.140 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 139], 20.00th=[ 149], 00:09:43.140 | 30.00th=[ 165], 40.00th=[ 178], 50.00th=[ 188], 60.00th=[ 202], 00:09:43.140 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 241], 95.00th=[ 243], 00:09:43.140 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 322], 99.95th=[ 322], 00:09:43.140 | 99.99th=[ 322] 00:09:43.140 bw ( KiB/s): min= 4087, max= 4087, per=24.61%, avg=4087.00, stdev= 0.00, samples=1 00:09:43.140 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:43.140 lat (usec) : 250=93.52%, 500=4.88%, 750=0.08% 
00:09:43.140 lat (msec) : 50=1.52% 00:09:43.140 cpu : usr=0.50%, sys=1.30%, ctx=1190, majf=0, minf=1 00:09:43.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.140 issued rwts: total=512,676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.140 job2: (groupid=0, jobs=1): err= 0: pid=2150836: Tue Nov 19 11:20:56 2024 00:09:43.140 read: IOPS=31, BW=125KiB/s (128kB/s)(128KiB/1026msec) 00:09:43.140 slat (nsec): min=7472, max=24824, avg=16575.16, stdev=6679.15 00:09:43.140 clat (usec): min=237, max=42028, avg=28500.55, stdev=19337.70 00:09:43.140 lat (usec): min=245, max=42041, avg=28517.13, stdev=19343.58 00:09:43.140 clat percentiles (usec): 00:09:43.140 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 297], 00:09:43.140 | 30.00th=[ 318], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:43.140 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:43.140 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:43.140 | 99.99th=[42206] 00:09:43.140 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:09:43.140 slat (nsec): min=11231, max=44280, avg=12755.70, stdev=1793.20 00:09:43.140 clat (usec): min=135, max=314, avg=202.72, stdev=31.57 00:09:43.140 lat (usec): min=148, max=358, avg=215.48, stdev=31.67 00:09:43.140 clat percentiles (usec): 00:09:43.140 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 163], 20.00th=[ 176], 00:09:43.140 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 208], 00:09:43.140 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 251], 00:09:43.140 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 314], 99.95th=[ 314], 00:09:43.140 | 99.99th=[ 314] 00:09:43.140 bw ( KiB/s): min= 4087, max= 
4087, per=24.61%, avg=4087.00, stdev= 0.00, samples=1 00:09:43.140 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:43.140 lat (usec) : 250=89.52%, 500=6.43% 00:09:43.140 lat (msec) : 50=4.04% 00:09:43.140 cpu : usr=0.39%, sys=0.59%, ctx=545, majf=0, minf=1 00:09:43.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.140 issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.140 job3: (groupid=0, jobs=1): err= 0: pid=2150837: Tue Nov 19 11:20:56 2024 00:09:43.140 read: IOPS=2278, BW=9115KiB/s (9334kB/s)(9124KiB/1001msec) 00:09:43.140 slat (nsec): min=7254, max=41743, avg=8504.86, stdev=2001.97 00:09:43.140 clat (usec): min=178, max=618, avg=235.66, stdev=27.72 00:09:43.140 lat (usec): min=187, max=626, avg=244.16, stdev=27.92 00:09:43.140 clat percentiles (usec): 00:09:43.140 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:09:43.140 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:09:43.140 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 281], 00:09:43.140 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 367], 99.95th=[ 429], 00:09:43.140 | 99.99th=[ 619] 00:09:43.140 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:43.140 slat (nsec): min=10170, max=38981, avg=11365.89, stdev=1416.90 00:09:43.140 clat (usec): min=117, max=253, avg=156.16, stdev=19.79 00:09:43.140 lat (usec): min=127, max=292, avg=167.52, stdev=20.10 00:09:43.140 clat percentiles (usec): 00:09:43.140 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 139], 00:09:43.140 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 157], 00:09:43.140 | 70.00th=[ 163], 80.00th=[ 174], 90.00th=[ 188], 95.00th=[ 
196], 00:09:43.140 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 227], 99.95th=[ 235], 00:09:43.140 | 99.99th=[ 253] 00:09:43.140 bw ( KiB/s): min=11449, max=11449, per=68.94%, avg=11449.00, stdev= 0.00, samples=1 00:09:43.140 iops : min= 2862, max= 2862, avg=2862.00, stdev= 0.00, samples=1 00:09:43.140 lat (usec) : 250=89.34%, 500=10.64%, 750=0.02% 00:09:43.140 cpu : usr=3.80%, sys=7.90%, ctx=4841, majf=0, minf=2 00:09:43.141 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.141 issued rwts: total=2281,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.141 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.141 00:09:43.141 Run status group 0 (all jobs): 00:09:43.141 READ: bw=11.3MiB/s (11.9MB/s), 125KiB/s-9115KiB/s (128kB/s-9334kB/s), io=11.6MiB (12.2MB), run=1001-1026msec 00:09:43.141 WRITE: bw=16.2MiB/s (17.0MB/s), 1996KiB/s-9.99MiB/s (2044kB/s-10.5MB/s), io=16.6MiB (17.4MB), run=1001-1026msec 00:09:43.141 00:09:43.141 Disk stats (read/write): 00:09:43.141 nvme0n1: ios=179/512, merge=0/0, ticks=1655/95, in_queue=1750, util=96.69% 00:09:43.141 nvme0n2: ios=51/512, merge=0/0, ticks=1284/104, in_queue=1388, util=96.75% 00:09:43.141 nvme0n3: ios=58/512, merge=0/0, ticks=1059/101, in_queue=1160, util=100.00% 00:09:43.141 nvme0n4: ios=2048/2089, merge=0/0, ticks=444/327, in_queue=771, util=89.73% 00:09:43.141 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:43.141 [global] 00:09:43.141 thread=1 00:09:43.141 invalidate=1 00:09:43.141 rw=write 00:09:43.141 time_based=1 00:09:43.141 runtime=1 00:09:43.141 ioengine=libaio 00:09:43.141 direct=1 00:09:43.141 bs=4096 00:09:43.141 iodepth=128 00:09:43.141 norandommap=0 
00:09:43.141 numjobs=1 00:09:43.141 00:09:43.141 verify_dump=1 00:09:43.141 verify_backlog=512 00:09:43.141 verify_state_save=0 00:09:43.141 do_verify=1 00:09:43.141 verify=crc32c-intel 00:09:43.141 [job0] 00:09:43.141 filename=/dev/nvme0n1 00:09:43.141 [job1] 00:09:43.141 filename=/dev/nvme0n2 00:09:43.141 [job2] 00:09:43.141 filename=/dev/nvme0n3 00:09:43.141 [job3] 00:09:43.141 filename=/dev/nvme0n4 00:09:43.141 Could not set queue depth (nvme0n1) 00:09:43.141 Could not set queue depth (nvme0n2) 00:09:43.141 Could not set queue depth (nvme0n3) 00:09:43.141 Could not set queue depth (nvme0n4) 00:09:43.432 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.432 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.432 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.432 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.432 fio-3.35 00:09:43.432 Starting 4 threads 00:09:44.820 00:09:44.820 job0: (groupid=0, jobs=1): err= 0: pid=2151209: Tue Nov 19 11:20:58 2024 00:09:44.820 read: IOPS=5739, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1002msec) 00:09:44.820 slat (nsec): min=1329, max=5811.2k, avg=82687.88, stdev=479072.61 00:09:44.820 clat (usec): min=1149, max=16364, avg=10327.23, stdev=1537.52 00:09:44.820 lat (usec): min=1994, max=16457, avg=10409.92, stdev=1581.73 00:09:44.820 clat percentiles (usec): 00:09:44.820 | 1.00th=[ 5145], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 9765], 00:09:44.820 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:09:44.820 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11863], 95.00th=[13042], 00:09:44.820 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15008], 99.95th=[15401], 00:09:44.820 | 99.99th=[16319] 00:09:44.820 write: IOPS=6131, BW=24.0MiB/s 
(25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:09:44.820 slat (usec): min=2, max=21189, avg=77.75, stdev=466.01 00:09:44.820 clat (usec): min=617, max=41332, avg=10988.54, stdev=3281.97 00:09:44.820 lat (usec): min=625, max=41365, avg=11066.29, stdev=3312.44 00:09:44.820 clat percentiles (usec): 00:09:44.820 | 1.00th=[ 4015], 5.00th=[ 7439], 10.00th=[ 8848], 20.00th=[ 9896], 00:09:44.820 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10683], 00:09:44.820 | 70.00th=[10814], 80.00th=[11207], 90.00th=[12518], 95.00th=[14353], 00:09:44.820 | 99.00th=[27395], 99.50th=[27657], 99.90th=[27657], 99.95th=[28443], 00:09:44.820 | 99.99th=[41157] 00:09:44.820 bw ( KiB/s): min=24512, max=24576, per=35.63%, avg=24544.00, stdev=45.25, samples=2 00:09:44.820 iops : min= 6128, max= 6144, avg=6136.00, stdev=11.31, samples=2 00:09:44.820 lat (usec) : 750=0.03%, 1000=0.02% 00:09:44.820 lat (msec) : 2=0.22%, 4=0.30%, 10=22.92%, 20=74.38%, 50=2.13% 00:09:44.820 cpu : usr=3.60%, sys=5.29%, ctx=725, majf=0, minf=1 00:09:44.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:44.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.820 issued rwts: total=5751,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.820 job1: (groupid=0, jobs=1): err= 0: pid=2151213: Tue Nov 19 11:20:58 2024 00:09:44.820 read: IOPS=3550, BW=13.9MiB/s (14.5MB/s)(14.5MiB/1045msec) 00:09:44.820 slat (nsec): min=1385, max=15023k, avg=123216.17, stdev=818024.18 00:09:44.820 clat (usec): min=4233, max=82016, avg=15644.58, stdev=11910.73 00:09:44.820 lat (usec): min=4882, max=82026, avg=15767.79, stdev=11967.26 00:09:44.820 clat percentiles (usec): 00:09:44.820 | 1.00th=[ 5997], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10290], 00:09:44.820 | 30.00th=[10552], 40.00th=[11994], 
50.00th=[12780], 60.00th=[13304], 00:09:44.820 | 70.00th=[14615], 80.00th=[16909], 90.00th=[19268], 95.00th=[35390], 00:09:44.820 | 99.00th=[74974], 99.50th=[79168], 99.90th=[82314], 99.95th=[82314], 00:09:44.820 | 99.99th=[82314] 00:09:44.820 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:09:44.820 slat (usec): min=2, max=10672, avg=126.73, stdev=613.49 00:09:44.820 clat (usec): min=1387, max=82031, avg=18192.25, stdev=11702.01 00:09:44.820 lat (usec): min=1398, max=82042, avg=18318.98, stdev=11780.21 00:09:44.820 clat percentiles (usec): 00:09:44.820 | 1.00th=[ 1827], 5.00th=[ 5669], 10.00th=[ 8225], 20.00th=[10421], 00:09:44.820 | 30.00th=[10683], 40.00th=[11600], 50.00th=[12649], 60.00th=[17433], 00:09:44.820 | 70.00th=[22152], 80.00th=[22414], 90.00th=[39060], 95.00th=[44303], 00:09:44.820 | 99.00th=[50070], 99.50th=[54789], 99.90th=[55313], 99.95th=[55313], 00:09:44.820 | 99.99th=[82314] 00:09:44.820 bw ( KiB/s): min=12280, max=20480, per=23.78%, avg=16380.00, stdev=5798.28, samples=2 00:09:44.820 iops : min= 3070, max= 5120, avg=4095.00, stdev=1449.57, samples=2 00:09:44.820 lat (msec) : 2=0.55%, 4=0.60%, 10=14.91%, 20=60.88%, 50=20.43% 00:09:44.820 lat (msec) : 100=2.63% 00:09:44.820 cpu : usr=3.35%, sys=4.41%, ctx=502, majf=0, minf=1 00:09:44.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:44.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.820 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.820 job2: (groupid=0, jobs=1): err= 0: pid=2151216: Tue Nov 19 11:20:58 2024 00:09:44.820 read: IOPS=2664, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1005msec) 00:09:44.820 slat (nsec): min=1142, max=21619k, avg=161554.34, stdev=1159253.76 00:09:44.820 clat (usec): min=2269, max=59497, avg=20046.70, 
stdev=10325.32 00:09:44.820 lat (usec): min=4639, max=59524, avg=20208.26, stdev=10424.63 00:09:44.820 clat percentiles (usec): 00:09:44.820 | 1.00th=[ 8160], 5.00th=[ 8455], 10.00th=[13304], 20.00th=[15139], 00:09:44.820 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:09:44.820 | 70.00th=[16712], 80.00th=[23725], 90.00th=[38536], 95.00th=[46400], 00:09:44.820 | 99.00th=[48497], 99.50th=[49546], 99.90th=[55313], 99.95th=[58459], 00:09:44.821 | 99.99th=[59507] 00:09:44.821 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:09:44.821 slat (nsec): min=1975, max=49079k, avg=171228.80, stdev=1361288.93 00:09:44.821 clat (usec): min=2122, max=73065, avg=23463.20, stdev=13576.69 00:09:44.821 lat (usec): min=2130, max=73076, avg=23634.43, stdev=13651.53 00:09:44.821 clat percentiles (usec): 00:09:44.821 | 1.00th=[ 3228], 5.00th=[ 7701], 10.00th=[13042], 20.00th=[13829], 00:09:44.821 | 30.00th=[15533], 40.00th=[17957], 50.00th=[21103], 60.00th=[22414], 00:09:44.821 | 70.00th=[22938], 80.00th=[28181], 90.00th=[41681], 95.00th=[56886], 00:09:44.821 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:09:44.821 | 99.99th=[72877] 00:09:44.821 bw ( KiB/s): min=12208, max=12288, per=17.78%, avg=12248.00, stdev=56.57, samples=2 00:09:44.821 iops : min= 3052, max= 3072, avg=3062.00, stdev=14.14, samples=2 00:09:44.821 lat (msec) : 4=0.63%, 10=5.11%, 20=54.42%, 50=35.11%, 100=4.73% 00:09:44.821 cpu : usr=1.89%, sys=3.49%, ctx=247, majf=0, minf=1 00:09:44.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:44.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.821 issued rwts: total=2678,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.821 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.821 job3: (groupid=0, jobs=1): err= 0: pid=2151217: Tue Nov 
19 11:20:58 2024 00:09:44.821 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:09:44.821 slat (nsec): min=1536, max=16275k, avg=120233.54, stdev=857388.98 00:09:44.821 clat (usec): min=4081, max=42921, avg=14443.38, stdev=5180.04 00:09:44.821 lat (usec): min=4087, max=42940, avg=14563.62, stdev=5235.27 00:09:44.821 clat percentiles (usec): 00:09:44.821 | 1.00th=[ 4948], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11600], 00:09:44.821 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12387], 60.00th=[13173], 00:09:44.821 | 70.00th=[15401], 80.00th=[18220], 90.00th=[20055], 95.00th=[22676], 00:09:44.821 | 99.00th=[31589], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:44.821 | 99.99th=[42730] 00:09:44.821 write: IOPS=4652, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1007msec); 0 zone resets 00:09:44.821 slat (usec): min=2, max=7189, avg=89.57, stdev=362.13 00:09:44.821 clat (usec): min=1554, max=47648, avg=12999.43, stdev=7379.00 00:09:44.821 lat (usec): min=1568, max=47663, avg=13089.00, stdev=7429.43 00:09:44.821 clat percentiles (usec): 00:09:44.821 | 1.00th=[ 2802], 5.00th=[ 5014], 10.00th=[ 6915], 20.00th=[ 9765], 00:09:44.821 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12125], 60.00th=[12125], 00:09:44.821 | 70.00th=[12256], 80.00th=[12387], 90.00th=[21890], 95.00th=[29230], 00:09:44.821 | 99.00th=[46400], 99.50th=[46924], 99.90th=[47449], 99.95th=[47449], 00:09:44.821 | 99.99th=[47449] 00:09:44.821 bw ( KiB/s): min=16384, max=20480, per=26.76%, avg=18432.00, stdev=2896.31, samples=2 00:09:44.821 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:44.821 lat (msec) : 2=0.18%, 4=1.25%, 10=11.47%, 20=76.99%, 50=10.10% 00:09:44.821 cpu : usr=3.08%, sys=5.67%, ctx=643, majf=0, minf=1 00:09:44.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:44.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:09:44.821 issued rwts: total=4608,4685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.821 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.821 00:09:44.821 Run status group 0 (all jobs): 00:09:44.821 READ: bw=62.6MiB/s (65.6MB/s), 10.4MiB/s-22.4MiB/s (10.9MB/s-23.5MB/s), io=65.4MiB (68.6MB), run=1002-1045msec 00:09:44.821 WRITE: bw=67.3MiB/s (70.5MB/s), 11.9MiB/s-24.0MiB/s (12.5MB/s-25.1MB/s), io=70.3MiB (73.7MB), run=1002-1045msec 00:09:44.821 00:09:44.821 Disk stats (read/write): 00:09:44.821 nvme0n1: ios=4987/5120, merge=0/0, ticks=28981/30038, in_queue=59019, util=99.60% 00:09:44.821 nvme0n2: ios=3149/3584, merge=0/0, ticks=41140/63176, in_queue=104316, util=86.90% 00:09:44.821 nvme0n3: ios=2521/2560, merge=0/0, ticks=24578/32945, in_queue=57523, util=98.44% 00:09:44.821 nvme0n4: ios=3604/4047, merge=0/0, ticks=52485/53446, in_queue=105931, util=96.96% 00:09:44.821 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:44.821 [global] 00:09:44.821 thread=1 00:09:44.821 invalidate=1 00:09:44.821 rw=randwrite 00:09:44.821 time_based=1 00:09:44.821 runtime=1 00:09:44.821 ioengine=libaio 00:09:44.821 direct=1 00:09:44.821 bs=4096 00:09:44.821 iodepth=128 00:09:44.821 norandommap=0 00:09:44.821 numjobs=1 00:09:44.821 00:09:44.821 verify_dump=1 00:09:44.821 verify_backlog=512 00:09:44.821 verify_state_save=0 00:09:44.821 do_verify=1 00:09:44.821 verify=crc32c-intel 00:09:44.821 [job0] 00:09:44.821 filename=/dev/nvme0n1 00:09:44.821 [job1] 00:09:44.821 filename=/dev/nvme0n2 00:09:44.821 [job2] 00:09:44.821 filename=/dev/nvme0n3 00:09:44.821 [job3] 00:09:44.821 filename=/dev/nvme0n4 00:09:44.821 Could not set queue depth (nvme0n1) 00:09:44.821 Could not set queue depth (nvme0n2) 00:09:44.821 Could not set queue depth (nvme0n3) 00:09:44.821 Could not set queue depth (nvme0n4) 00:09:44.821 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.821 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.821 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.821 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.821 fio-3.35 00:09:44.821 Starting 4 threads 00:09:46.194 00:09:46.194 job0: (groupid=0, jobs=1): err= 0: pid=2151589: Tue Nov 19 11:20:59 2024 00:09:46.194 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:09:46.194 slat (nsec): min=1574, max=20442k, avg=182757.23, stdev=1341688.76 00:09:46.194 clat (usec): min=6122, max=51378, avg=22139.48, stdev=8186.31 00:09:46.194 lat (usec): min=6135, max=51387, avg=22322.24, stdev=8256.60 00:09:46.194 clat percentiles (usec): 00:09:46.194 | 1.00th=[ 7701], 5.00th=[14091], 10.00th=[14353], 20.00th=[15139], 00:09:46.194 | 30.00th=[17433], 40.00th=[21365], 50.00th=[21627], 60.00th=[22152], 00:09:46.194 | 70.00th=[22414], 80.00th=[24249], 90.00th=[34341], 95.00th=[40633], 00:09:46.194 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:09:46.194 | 99.99th=[51119] 00:09:46.194 write: IOPS=2933, BW=11.5MiB/s (12.0MB/s)(11.6MiB/1010msec); 0 zone resets 00:09:46.194 slat (usec): min=2, max=25522, avg=170.66, stdev=1159.72 00:09:46.194 clat (usec): min=302, max=111442, avg=24022.48, stdev=16858.37 00:09:46.194 lat (usec): min=479, max=111456, avg=24193.14, stdev=16969.09 00:09:46.194 clat percentiles (usec): 00:09:46.194 | 1.00th=[ 1893], 5.00th=[ 7439], 10.00th=[ 12256], 20.00th=[ 18220], 00:09:46.194 | 30.00th=[ 20841], 40.00th=[ 21365], 50.00th=[ 21627], 60.00th=[ 22152], 00:09:46.194 | 70.00th=[ 22414], 80.00th=[ 23200], 90.00th=[ 27395], 95.00th=[ 61604], 00:09:46.194 | 99.00th=[106431], 99.50th=[108528], 99.90th=[111674], 
99.95th=[111674], 00:09:46.194 | 99.99th=[111674] 00:09:46.194 bw ( KiB/s): min=10400, max=12288, per=16.06%, avg=11344.00, stdev=1335.02, samples=2 00:09:46.194 iops : min= 2600, max= 3072, avg=2836.00, stdev=333.75, samples=2 00:09:46.194 lat (usec) : 500=0.05%, 1000=0.04% 00:09:46.194 lat (msec) : 2=0.47%, 4=1.18%, 10=3.89%, 20=22.99%, 50=67.86% 00:09:46.194 lat (msec) : 100=2.52%, 250=1.00% 00:09:46.194 cpu : usr=2.87%, sys=3.37%, ctx=280, majf=0, minf=1 00:09:46.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:46.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.194 issued rwts: total=2560,2963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.194 job1: (groupid=0, jobs=1): err= 0: pid=2151590: Tue Nov 19 11:20:59 2024 00:09:46.194 read: IOPS=5693, BW=22.2MiB/s (23.3MB/s)(22.5MiB/1010msec) 00:09:46.194 slat (nsec): min=1307, max=9852.4k, avg=89616.87, stdev=628214.19 00:09:46.194 clat (usec): min=3706, max=20021, avg=11135.89, stdev=2534.26 00:09:46.194 lat (usec): min=3717, max=20024, avg=11225.50, stdev=2578.15 00:09:46.194 clat percentiles (usec): 00:09:46.194 | 1.00th=[ 4621], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[ 9896], 00:09:46.194 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:09:46.194 | 70.00th=[10814], 80.00th=[12125], 90.00th=[15270], 95.00th=[16909], 00:09:46.194 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19792], 99.95th=[20055], 00:09:46.194 | 99.99th=[20055] 00:09:46.194 write: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec); 0 zone resets 00:09:46.194 slat (usec): min=2, max=41357, avg=72.88, stdev=648.16 00:09:46.194 clat (usec): min=1628, max=48027, avg=10346.56, stdev=5615.52 00:09:46.194 lat (usec): min=1643, max=48049, avg=10419.44, stdev=5641.63 00:09:46.194 clat percentiles (usec): 
00:09:46.194 | 1.00th=[ 3458], 5.00th=[ 4948], 10.00th=[ 6783], 20.00th=[ 8586], 00:09:46.194 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10552], 00:09:46.194 | 70.00th=[10683], 80.00th=[10814], 90.00th=[10945], 95.00th=[11207], 00:09:46.194 | 99.00th=[46924], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:09:46.194 | 99.99th=[47973] 00:09:46.194 bw ( KiB/s): min=24504, max=24576, per=34.75%, avg=24540.00, stdev=50.91, samples=2 00:09:46.194 iops : min= 6126, max= 6144, avg=6135.00, stdev=12.73, samples=2 00:09:46.194 lat (msec) : 2=0.08%, 4=1.12%, 10=33.24%, 20=64.39%, 50=1.19% 00:09:46.194 cpu : usr=5.65%, sys=5.95%, ctx=638, majf=0, minf=1 00:09:46.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:46.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.194 issued rwts: total=5750,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.194 job2: (groupid=0, jobs=1): err= 0: pid=2151593: Tue Nov 19 11:20:59 2024 00:09:46.194 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:09:46.194 slat (nsec): min=1334, max=20484k, avg=173620.17, stdev=1347554.78 00:09:46.194 clat (usec): min=6108, max=49974, avg=21662.51, stdev=6251.17 00:09:46.194 lat (usec): min=6114, max=49998, avg=21836.13, stdev=6369.26 00:09:46.194 clat percentiles (usec): 00:09:46.194 | 1.00th=[ 7898], 5.00th=[13173], 10.00th=[13435], 20.00th=[15008], 00:09:46.194 | 30.00th=[18482], 40.00th=[21365], 50.00th=[21890], 60.00th=[22414], 00:09:46.194 | 70.00th=[22938], 80.00th=[27132], 90.00th=[29754], 95.00th=[32900], 00:09:46.194 | 99.00th=[38011], 99.50th=[40109], 99.90th=[41157], 99.95th=[47449], 00:09:46.194 | 99.99th=[50070] 00:09:46.194 write: IOPS=3235, BW=12.6MiB/s (13.3MB/s)(12.8MiB/1013msec); 0 zone resets 00:09:46.194 slat (usec): min=2, 
max=19334, avg=136.23, stdev=943.16 00:09:46.194 clat (usec): min=1454, max=41102, avg=18906.66, stdev=5300.35 00:09:46.194 lat (usec): min=1495, max=41122, avg=19042.89, stdev=5398.25 00:09:46.194 clat percentiles (usec): 00:09:46.194 | 1.00th=[ 5866], 5.00th=[ 8160], 10.00th=[11338], 20.00th=[12256], 00:09:46.194 | 30.00th=[18482], 40.00th=[20055], 50.00th=[21365], 60.00th=[21627], 00:09:46.194 | 70.00th=[22152], 80.00th=[22414], 90.00th=[23200], 95.00th=[23725], 00:09:46.194 | 99.00th=[29492], 99.50th=[29754], 99.90th=[39584], 99.95th=[40109], 00:09:46.194 | 99.99th=[41157] 00:09:46.194 bw ( KiB/s): min=12288, max=12920, per=17.85%, avg=12604.00, stdev=446.89, samples=2 00:09:46.194 iops : min= 3072, max= 3230, avg=3151.00, stdev=111.72, samples=2 00:09:46.194 lat (msec) : 2=0.02%, 4=0.09%, 10=3.69%, 20=30.83%, 50=65.37% 00:09:46.194 cpu : usr=2.37%, sys=4.25%, ctx=272, majf=0, minf=1 00:09:46.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:46.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.194 issued rwts: total=3072,3278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.194 job3: (groupid=0, jobs=1): err= 0: pid=2151594: Tue Nov 19 11:20:59 2024 00:09:46.194 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:09:46.194 slat (nsec): min=1545, max=4030.9k, avg=92926.84, stdev=499324.62 00:09:46.194 clat (usec): min=8494, max=16070, avg=11990.92, stdev=1184.08 00:09:46.194 lat (usec): min=8656, max=16558, avg=12083.84, stdev=1228.17 00:09:46.194 clat percentiles (usec): 00:09:46.194 | 1.00th=[ 8979], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11207], 00:09:46.194 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:09:46.194 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13566], 95.00th=[14091], 00:09:46.194 | 
99.00th=[15139], 99.50th=[15401], 99.90th=[15926], 99.95th=[16057], 00:09:46.194 | 99.99th=[16057] 00:09:46.194 write: IOPS=5470, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1005msec); 0 zone resets 00:09:46.194 slat (usec): min=2, max=3999, avg=89.17, stdev=419.91 00:09:46.194 clat (usec): min=3373, max=16943, avg=11860.86, stdev=1188.64 00:09:46.194 lat (usec): min=4191, max=16946, avg=11950.04, stdev=1230.68 00:09:46.194 clat percentiles (usec): 00:09:46.194 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11469], 00:09:46.194 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11863], 60.00th=[11994], 00:09:46.194 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12911], 95.00th=[13829], 00:09:46.194 | 99.00th=[15270], 99.50th=[15795], 99.90th=[16909], 99.95th=[16909], 00:09:46.194 | 99.99th=[16909] 00:09:46.194 bw ( KiB/s): min=20528, max=22440, per=30.42%, avg=21484.00, stdev=1351.99, samples=2 00:09:46.194 iops : min= 5132, max= 5610, avg=5371.00, stdev=338.00, samples=2 00:09:46.194 lat (msec) : 4=0.01%, 10=6.01%, 20=93.98% 00:09:46.194 cpu : usr=3.59%, sys=7.77%, ctx=565, majf=0, minf=1 00:09:46.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:46.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.194 issued rwts: total=5120,5498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.195 00:09:46.195 Run status group 0 (all jobs): 00:09:46.195 READ: bw=63.6MiB/s (66.7MB/s), 9.90MiB/s-22.2MiB/s (10.4MB/s-23.3MB/s), io=64.5MiB (67.6MB), run=1005-1013msec 00:09:46.195 WRITE: bw=69.0MiB/s (72.3MB/s), 11.5MiB/s-23.8MiB/s (12.0MB/s-24.9MB/s), io=69.9MiB (73.2MB), run=1005-1013msec 00:09:46.195 00:09:46.195 Disk stats (read/write): 00:09:46.195 nvme0n1: ios=2075/2559, merge=0/0, ticks=45062/52252, in_queue=97314, util=98.40% 00:09:46.195 nvme0n2: ios=4630/4831, 
merge=0/0, ticks=50047/44279, in_queue=94326, util=97.12% 00:09:46.195 nvme0n3: ios=2048/2559, merge=0/0, ticks=46167/51193, in_queue=97360, util=87.51% 00:09:46.195 nvme0n4: ios=4117/4447, merge=0/0, ticks=16229/16214, in_queue=32443, util=99.01% 00:09:46.195 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:46.195 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2151825 00:09:46.195 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:46.195 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:46.195 [global] 00:09:46.195 thread=1 00:09:46.195 invalidate=1 00:09:46.195 rw=read 00:09:46.195 time_based=1 00:09:46.195 runtime=10 00:09:46.195 ioengine=libaio 00:09:46.195 direct=1 00:09:46.195 bs=4096 00:09:46.195 iodepth=1 00:09:46.195 norandommap=1 00:09:46.195 numjobs=1 00:09:46.195 00:09:46.195 [job0] 00:09:46.195 filename=/dev/nvme0n1 00:09:46.195 [job1] 00:09:46.195 filename=/dev/nvme0n2 00:09:46.195 [job2] 00:09:46.195 filename=/dev/nvme0n3 00:09:46.195 [job3] 00:09:46.195 filename=/dev/nvme0n4 00:09:46.195 Could not set queue depth (nvme0n1) 00:09:46.195 Could not set queue depth (nvme0n2) 00:09:46.195 Could not set queue depth (nvme0n3) 00:09:46.195 Could not set queue depth (nvme0n4) 00:09:46.452 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.452 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.452 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.452 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.452 fio-3.35 00:09:46.452 Starting 4 threads 00:09:49.733 11:21:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:49.733 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=35278848, buflen=4096 00:09:49.733 fio: pid=2151987, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.733 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:49.733 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=335872, buflen=4096 00:09:49.733 fio: pid=2151986, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.733 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.733 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:49.733 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.733 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:49.733 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=12292096, buflen=4096 00:09:49.733 fio: pid=2151983, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.991 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=3530752, buflen=4096 00:09:49.991 fio: pid=2151984, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:09:49.991 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:09:49.991 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:49.991 00:09:49.991 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2151983: Tue Nov 19 11:21:03 2024 00:09:49.991 read: IOPS=955, BW=3820KiB/s (3912kB/s)(11.7MiB/3142msec) 00:09:49.991 slat (usec): min=6, max=12640, avg=11.90, stdev=230.57 00:09:49.991 clat (usec): min=142, max=42006, avg=1026.28, stdev=5840.43 00:09:49.991 lat (usec): min=149, max=53840, avg=1038.17, stdev=5875.18 00:09:49.991 clat percentiles (usec): 00:09:49.991 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:09:49.991 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:09:49.991 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 215], 95.00th=[ 235], 00:09:49.991 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:49.991 | 99.99th=[42206] 00:09:49.991 bw ( KiB/s): min= 93, max=15984, per=26.53%, avg=3996.83, stdev=6590.91, samples=6 00:09:49.991 iops : min= 23, max= 3996, avg=999.17, stdev=1647.76, samples=6 00:09:49.991 lat (usec) : 250=96.97%, 500=0.83%, 750=0.03% 00:09:49.991 lat (msec) : 2=0.07%, 50=2.07% 00:09:49.991 cpu : usr=0.32%, sys=0.83%, ctx=3003, majf=0, minf=1 00:09:49.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.991 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.991 issued rwts: total=3002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.991 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2151984: Tue Nov 19 11:21:03 2024 00:09:49.991 read: IOPS=258, BW=1034KiB/s 
(1059kB/s)(3448KiB/3335msec) 00:09:49.991 slat (usec): min=6, max=6761, avg=24.92, stdev=316.48 00:09:49.991 clat (usec): min=143, max=44811, avg=3841.53, stdev=11669.86 00:09:49.992 lat (usec): min=167, max=47924, avg=3859.05, stdev=11701.15 00:09:49.992 clat percentiles (usec): 00:09:49.992 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 188], 00:09:49.992 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 217], 00:09:49.992 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 285], 95.00th=[41157], 00:09:49.992 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:09:49.992 | 99.99th=[44827] 00:09:49.992 bw ( KiB/s): min= 93, max= 3304, per=7.56%, avg=1138.17, stdev=1595.56, samples=6 00:09:49.992 iops : min= 23, max= 826, avg=284.50, stdev=398.92, samples=6 00:09:49.992 lat (usec) : 250=81.11%, 500=9.85% 00:09:49.992 lat (msec) : 10=0.12%, 50=8.81% 00:09:49.992 cpu : usr=0.12%, sys=0.45%, ctx=866, majf=0, minf=2 00:09:49.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.992 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.992 issued rwts: total=863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.992 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2151986: Tue Nov 19 11:21:03 2024 00:09:49.992 read: IOPS=28, BW=113KiB/s (115kB/s)(328KiB/2912msec) 00:09:49.992 slat (nsec): min=7134, max=42506, avg=21588.24, stdev=5612.66 00:09:49.992 clat (usec): min=209, max=42046, avg=35221.54, stdev=14553.24 00:09:49.992 lat (usec): min=217, max=42069, avg=35243.12, stdev=14556.33 00:09:49.992 clat percentiles (usec): 00:09:49.992 | 1.00th=[ 210], 5.00th=[ 243], 10.00th=[ 338], 20.00th=[40633], 00:09:49.992 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:09:49.992 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:09:49.992 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:49.992 | 99.99th=[42206] 00:09:49.992 bw ( KiB/s): min= 96, max= 168, per=0.76%, avg=115.20, stdev=31.29, samples=5 00:09:49.992 iops : min= 24, max= 42, avg=28.80, stdev= 7.82, samples=5 00:09:49.992 lat (usec) : 250=8.43%, 500=4.82%, 750=1.20% 00:09:49.992 lat (msec) : 50=84.34% 00:09:49.992 cpu : usr=0.00%, sys=0.10%, ctx=83, majf=0, minf=2 00:09:49.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.992 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.992 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.992 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2151987: Tue Nov 19 11:21:03 2024 00:09:49.992 read: IOPS=3168, BW=12.4MiB/s (13.0MB/s)(33.6MiB/2719msec) 00:09:49.992 slat (nsec): min=6493, max=62599, avg=7377.13, stdev=1371.82 00:09:49.992 clat (usec): min=153, max=42235, avg=304.26, stdev=2177.90 00:09:49.992 lat (usec): min=165, max=42243, avg=311.64, stdev=2178.19 00:09:49.992 clat percentiles (usec): 00:09:49.992 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:09:49.992 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:09:49.992 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 219], 00:09:49.992 | 99.00th=[ 253], 99.50th=[ 351], 99.90th=[41157], 99.95th=[41157], 00:09:49.992 | 99.99th=[42206] 00:09:49.992 bw ( KiB/s): min= 1760, max=20904, per=81.02%, avg=12204.80, stdev=9270.70, samples=5 00:09:49.992 iops : min= 440, max= 5226, avg=3051.20, stdev=2317.68, samples=5 00:09:49.992 lat (usec) : 250=98.83%, 500=0.86%, 750=0.01% 
00:09:49.992 lat (msec) : 50=0.29% 00:09:49.992 cpu : usr=0.66%, sys=2.94%, ctx=8615, majf=0, minf=2 00:09:49.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.992 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.992 issued rwts: total=8614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.992 00:09:49.992 Run status group 0 (all jobs): 00:09:49.992 READ: bw=14.7MiB/s (15.4MB/s), 113KiB/s-12.4MiB/s (115kB/s-13.0MB/s), io=49.1MiB (51.4MB), run=2719-3335msec 00:09:49.992 00:09:49.992 Disk stats (read/write): 00:09:49.992 nvme0n1: ios=3000/0, merge=0/0, ticks=3022/0, in_queue=3022, util=95.38% 00:09:49.992 nvme0n2: ios=856/0, merge=0/0, ticks=3058/0, in_queue=3058, util=96.10% 00:09:49.992 nvme0n3: ios=81/0, merge=0/0, ticks=2850/0, in_queue=2850, util=96.55% 00:09:49.992 nvme0n4: ios=8157/0, merge=0/0, ticks=2496/0, in_queue=2496, util=96.45% 00:09:50.250 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.250 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:50.508 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.508 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:50.508 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.508 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:50.766 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.766 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:51.025 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:51.025 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2151825 00:09:51.025 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:51.025 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:51.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.025 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:51.025 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:51.025 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:51.025 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.025 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:51.025 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.283 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:51.283 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:51.283 11:21:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:51.283 nvmf hotplug test: fio failed as expected 00:09:51.283 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.283 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:51.283 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:51.283 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:51.283 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:51.283 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:51.283 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:51.283 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:51.283 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.283 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:51.283 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.283 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.283 rmmod nvme_tcp 00:09:51.283 rmmod nvme_fabrics 00:09:51.542 rmmod nvme_keyring 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 
00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2148981 ']' 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2148981 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2148981 ']' 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2148981 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2148981 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2148981' 00:09:51.542 killing process with pid 2148981 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2148981 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2148981 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:51.542 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:51.800 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.800 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:51.800 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.800 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.800 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.707 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:53.707 00:09:53.707 real 0m26.999s 00:09:53.707 user 1m47.057s 00:09:53.707 sys 0m8.310s 00:09:53.707 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.707 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.707 ************************************ 00:09:53.707 END TEST nvmf_fio_target 00:09:53.707 ************************************ 00:09:53.707 11:21:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:53.707 11:21:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.707 11:21:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.707 11:21:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.707 
************************************ 00:09:53.707 START TEST nvmf_bdevio 00:09:53.707 ************************************ 00:09:53.707 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:53.968 * Looking for test storage... 00:09:53.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.968 11:21:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.968 --rc genhtml_branch_coverage=1 00:09:53.968 --rc genhtml_function_coverage=1 00:09:53.968 --rc genhtml_legend=1 00:09:53.968 --rc geninfo_all_blocks=1 00:09:53.968 --rc geninfo_unexecuted_blocks=1 00:09:53.968 00:09:53.968 ' 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.968 --rc genhtml_branch_coverage=1 00:09:53.968 --rc genhtml_function_coverage=1 00:09:53.968 --rc genhtml_legend=1 00:09:53.968 --rc geninfo_all_blocks=1 00:09:53.968 --rc geninfo_unexecuted_blocks=1 00:09:53.968 00:09:53.968 ' 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.968 --rc genhtml_branch_coverage=1 00:09:53.968 --rc genhtml_function_coverage=1 00:09:53.968 --rc genhtml_legend=1 00:09:53.968 --rc geninfo_all_blocks=1 00:09:53.968 --rc geninfo_unexecuted_blocks=1 00:09:53.968 00:09:53.968 ' 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.968 --rc genhtml_branch_coverage=1 00:09:53.968 --rc genhtml_function_coverage=1 00:09:53.968 --rc genhtml_legend=1 00:09:53.968 --rc geninfo_all_blocks=1 00:09:53.968 --rc geninfo_unexecuted_blocks=1 00:09:53.968 00:09:53.968 ' 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.968 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.969 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.542 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.543 11:21:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:00.543 11:21:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:00.543 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:00.543 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:00.543 
11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:00.543 Found net devices under 0000:86:00.0: cvl_0_0 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:00.543 Found net devices under 0000:86:00.1: cvl_0_1 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:00.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:00.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:10:00.543 00:10:00.543 --- 10.0.0.2 ping statistics --- 00:10:00.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.543 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:10:00.543 00:10:00.543 --- 10.0.0.1 ping statistics --- 00:10:00.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.543 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:00.543 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.544 11:21:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2156952 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2156952 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2156952 ']' 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.544 [2024-11-19 11:21:13.720258] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:00.544 [2024-11-19 11:21:13.720303] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.544 [2024-11-19 11:21:13.799638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.544 [2024-11-19 11:21:13.840441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.544 [2024-11-19 11:21:13.840483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.544 [2024-11-19 11:21:13.840490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.544 [2024-11-19 11:21:13.840497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.544 [2024-11-19 11:21:13.840502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:00.544 [2024-11-19 11:21:13.842133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:00.544 [2024-11-19 11:21:13.842245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:00.544 [2024-11-19 11:21:13.842356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.544 [2024-11-19 11:21:13.842357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.544 [2024-11-19 11:21:13.986541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:00.544 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.544 11:21:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.544 Malloc0 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.544 [2024-11-19 11:21:14.046635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:00.544 { 00:10:00.544 "params": { 00:10:00.544 "name": "Nvme$subsystem", 00:10:00.544 "trtype": "$TEST_TRANSPORT", 00:10:00.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.544 "adrfam": "ipv4", 00:10:00.544 "trsvcid": "$NVMF_PORT", 00:10:00.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.544 "hdgst": ${hdgst:-false}, 00:10:00.544 "ddgst": ${ddgst:-false} 00:10:00.544 }, 00:10:00.544 "method": "bdev_nvme_attach_controller" 00:10:00.544 } 00:10:00.544 EOF 00:10:00.544 )") 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:00.544 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:00.544 "params": { 00:10:00.544 "name": "Nvme1", 00:10:00.544 "trtype": "tcp", 00:10:00.544 "traddr": "10.0.0.2", 00:10:00.544 "adrfam": "ipv4", 00:10:00.544 "trsvcid": "4420", 00:10:00.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.544 "hdgst": false, 00:10:00.544 "ddgst": false 00:10:00.544 }, 00:10:00.544 "method": "bdev_nvme_attach_controller" 00:10:00.544 }' 00:10:00.544 [2024-11-19 11:21:14.097695] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:10:00.544 [2024-11-19 11:21:14.097738] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2156982 ] 00:10:00.544 [2024-11-19 11:21:14.174472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.544 [2024-11-19 11:21:14.218479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.544 [2024-11-19 11:21:14.218585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.544 [2024-11-19 11:21:14.218585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.802 I/O targets: 00:10:00.802 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:00.802 00:10:00.802 00:10:00.802 CUnit - A unit testing framework for C - Version 2.1-3 00:10:00.802 http://cunit.sourceforge.net/ 00:10:00.802 00:10:00.802 00:10:00.802 Suite: bdevio tests on: Nvme1n1 00:10:00.802 Test: blockdev write read block ...passed 00:10:00.802 Test: blockdev write zeroes read block ...passed 00:10:00.803 Test: blockdev write zeroes read no split ...passed 00:10:00.803 Test: blockdev write zeroes read split 
...passed 00:10:00.803 Test: blockdev write zeroes read split partial ...passed 00:10:00.803 Test: blockdev reset ...[2024-11-19 11:21:14.532032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:00.803 [2024-11-19 11:21:14.532099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f59340 (9): Bad file descriptor 00:10:00.803 [2024-11-19 11:21:14.544185] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:00.803 passed 00:10:01.061 Test: blockdev write read 8 blocks ...passed 00:10:01.061 Test: blockdev write read size > 128k ...passed 00:10:01.061 Test: blockdev write read invalid size ...passed 00:10:01.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.061 Test: blockdev write read max offset ...passed 00:10:01.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.061 Test: blockdev writev readv 8 blocks ...passed 00:10:01.061 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.061 Test: blockdev writev readv block ...passed 00:10:01.061 Test: blockdev writev readv size > 128k ...passed 00:10:01.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.061 Test: blockdev comparev and writev ...[2024-11-19 11:21:14.795702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.061 [2024-11-19 11:21:14.795730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:01.061 [2024-11-19 11:21:14.795744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.061 [2024-11-19 
11:21:14.795753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:01.061 [2024-11-19 11:21:14.796003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.061 [2024-11-19 11:21:14.796014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:01.061 [2024-11-19 11:21:14.796025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.061 [2024-11-19 11:21:14.796033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:01.061 [2024-11-19 11:21:14.796277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.061 [2024-11-19 11:21:14.796287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:01.061 [2024-11-19 11:21:14.796299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.061 [2024-11-19 11:21:14.796306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:01.061 [2024-11-19 11:21:14.796531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.061 [2024-11-19 11:21:14.796542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:01.061 [2024-11-19 11:21:14.796553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.061 [2024-11-19 11:21:14.796561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:01.061 passed 00:10:01.319 Test: blockdev nvme passthru rw ...passed 00:10:01.319 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:21:14.878343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.319 [2024-11-19 11:21:14.878361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:01.319 [2024-11-19 11:21:14.878467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.319 [2024-11-19 11:21:14.878477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:01.319 [2024-11-19 11:21:14.878583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.319 [2024-11-19 11:21:14.878596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:01.319 [2024-11-19 11:21:14.878704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.319 [2024-11-19 11:21:14.878713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:01.319 passed 00:10:01.319 Test: blockdev nvme admin passthru ...passed 00:10:01.319 Test: blockdev copy ...passed 00:10:01.319 00:10:01.319 Run Summary: Type Total Ran Passed Failed Inactive 00:10:01.319 suites 1 1 n/a 0 0 00:10:01.319 tests 23 23 23 0 0 00:10:01.319 asserts 152 152 152 0 n/a 00:10:01.319 00:10:01.319 Elapsed time = 1.040 seconds 
00:10:01.319 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.319 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.319 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.320 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.320 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:01.320 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:01.320 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:01.320 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:01.320 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.320 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:01.320 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.320 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.320 rmmod nvme_tcp 00:10:01.578 rmmod nvme_fabrics 00:10:01.578 rmmod nvme_keyring 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2156952 ']' 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2156952 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2156952 ']' 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2156952 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2156952 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2156952' 00:10:01.578 killing process with pid 2156952 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2156952 00:10:01.578 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2156952 00:10:01.837 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:01.837 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:01.837 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:01.837 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:01.837 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:01.837 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:01.837 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:01.837 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:01.837 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:01.837 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.837 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.837 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.743 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:03.743 00:10:03.743 real 0m9.975s 00:10:03.743 user 0m9.811s 00:10:03.743 sys 0m4.993s 00:10:03.743 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.743 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.743 ************************************ 00:10:03.743 END TEST nvmf_bdevio 00:10:03.743 ************************************ 00:10:03.743 11:21:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:03.743 00:10:03.743 real 4m36.636s 00:10:03.743 user 10m29.512s 00:10:03.743 sys 1m39.404s 00:10:03.743 11:21:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.743 11:21:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.743 ************************************ 00:10:03.743 END TEST nvmf_target_core 00:10:03.743 ************************************ 00:10:03.743 11:21:17 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:03.743 11:21:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:03.743 11:21:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.743 11:21:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:04.003 ************************************ 00:10:04.003 START TEST nvmf_target_extra 00:10:04.003 ************************************ 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:04.003 * Looking for test storage... 00:10:04.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
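[Editor's note] The `cmp_versions` trace above splits both version strings on `IFS=.-:` into arrays and compares them component by component. A minimal sketch of that pattern (the `version_lt` name is illustrative, not the exact helper in `scripts/common.sh`):

```shell
# Compare two dotted version strings numerically, field by field,
# mirroring the IFS=.-: / read -ra ver1,ver2 trace in the log above.
version_lt() {
    local IFS=.-:                 # split on ".", "-" and ":" like the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}           # missing components compare as 0
        b=${ver2[v]:-0}
        (( a < b )) && return 0   # strictly less: succeed
        (( a > b )) && return 1
    done
    return 1                      # equal is not "less than"
}
```

This is why the log's `lt 1.15 2` check succeeds: the first components already differ (1 < 2), so later fields are never consulted.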
00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.003 --rc genhtml_branch_coverage=1 00:10:04.003 --rc genhtml_function_coverage=1 00:10:04.003 --rc genhtml_legend=1 00:10:04.003 --rc geninfo_all_blocks=1 
00:10:04.003 --rc geninfo_unexecuted_blocks=1 00:10:04.003 00:10:04.003 ' 00:10:04.003 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.003 --rc genhtml_branch_coverage=1 00:10:04.003 --rc genhtml_function_coverage=1 00:10:04.003 --rc genhtml_legend=1 00:10:04.003 --rc geninfo_all_blocks=1 00:10:04.004 --rc geninfo_unexecuted_blocks=1 00:10:04.004 00:10:04.004 ' 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.004 --rc genhtml_branch_coverage=1 00:10:04.004 --rc genhtml_function_coverage=1 00:10:04.004 --rc genhtml_legend=1 00:10:04.004 --rc geninfo_all_blocks=1 00:10:04.004 --rc geninfo_unexecuted_blocks=1 00:10:04.004 00:10:04.004 ' 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.004 --rc genhtml_branch_coverage=1 00:10:04.004 --rc genhtml_function_coverage=1 00:10:04.004 --rc genhtml_legend=1 00:10:04.004 --rc geninfo_all_blocks=1 00:10:04.004 --rc geninfo_unexecuted_blocks=1 00:10:04.004 00:10:04.004 ' 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.004 11:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.264 ************************************ 00:10:04.264 START TEST nvmf_example 00:10:04.264 ************************************ 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:04.264 * Looking for test storage... 00:10:04.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.264 
11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.264 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
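[Editor's note] The version check being traced here exists to pick lcov runtime flags: the log shows `lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'` for the pre-2.0 branch. A hedged sketch of that selection (the `lcov_rc_opts` name and the 2.x flag spellings are assumptions; only the older flags appear in this log, and the comparison is simplified to the major component):

```shell
# Choose lcov --rc options based on the reported lcov version.
# Assumption: lcov 2.x dropped the "lcov_" prefix on these rc keys.
lcov_rc_opts() {
    local major=${1%%.*}   # leading component of the version string
    if [ "$major" -lt 2 ]; then
        echo '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    else
        echo '--rc branch_coverage=1 --rc function_coverage=1'
    fi
}
```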
00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.265 --rc genhtml_branch_coverage=1 00:10:04.265 --rc genhtml_function_coverage=1 00:10:04.265 --rc genhtml_legend=1 00:10:04.265 --rc geninfo_all_blocks=1 00:10:04.265 --rc geninfo_unexecuted_blocks=1 00:10:04.265 00:10:04.265 ' 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.265 --rc genhtml_branch_coverage=1 00:10:04.265 --rc genhtml_function_coverage=1 00:10:04.265 --rc genhtml_legend=1 00:10:04.265 --rc geninfo_all_blocks=1 00:10:04.265 --rc geninfo_unexecuted_blocks=1 00:10:04.265 00:10:04.265 ' 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.265 --rc genhtml_branch_coverage=1 00:10:04.265 --rc genhtml_function_coverage=1 00:10:04.265 --rc genhtml_legend=1 00:10:04.265 --rc geninfo_all_blocks=1 00:10:04.265 --rc geninfo_unexecuted_blocks=1 00:10:04.265 00:10:04.265 ' 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.265 --rc 
genhtml_branch_coverage=1 00:10:04.265 --rc genhtml_function_coverage=1 00:10:04.265 --rc genhtml_legend=1 00:10:04.265 --rc geninfo_all_blocks=1 00:10:04.265 --rc geninfo_unexecuted_blocks=1 00:10:04.265 00:10:04.265 ' 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.265 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:04.265 11:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:04.265 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.266 
11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.266 11:21:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:10.842 11:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:10.842 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:10.842 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:10.842 Found net devices under 0000:86:00.0: cvl_0_0 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.842 11:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:10.842 Found net devices under 0000:86:00.1: cvl_0_1 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:10.842 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.843 
11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:10.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:10:10.843 00:10:10.843 --- 10.0.0.2 ping statistics --- 00:10:10.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.843 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:10.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:10:10.843 00:10:10.843 --- 10.0.0.1 ping statistics --- 00:10:10.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.843 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:10.843 11:21:23 
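The `nvmf_tcp_init` sequence traced above (nvmf/common.sh@250-291) builds a two-namespace loopback topology: the target-side interface is moved into a fresh network namespace, both sides get a 10.0.0.0/24 address, port 4420 is opened, and connectivity is verified with pings in both directions. A minimal dry-run sketch of those steps, assuming the `cvl_0_0`/`cvl_0_1` interface names from this particular run; the commands are only printed, not executed, since they require root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps seen in the log above.
# Assumptions: interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24
# addressing are taken from this run; the real steps need root, so the
# commands are collected and printed rather than executed.
TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
cmds=(
  "ip -4 addr flush $TGT_IF"
  "ip -4 addr flush $INI_IF"
  "ip netns add $NS"
  "ip link set $TGT_IF netns $NS"                          # target side lives in the namespace
  "ip addr add 10.0.0.1/24 dev $INI_IF"                    # initiator IP
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"  # target IP
  "ip link set $INI_IF up"
  "ip netns exec $NS ip link set $TGT_IF up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
  "ping -c 1 10.0.0.2"                                     # initiator -> target
  "ip netns exec $NS ping -c 1 10.0.0.1"                   # target -> initiator
)
printf '%s\n' "${cmds[@]}"
```

Running the target inside its own namespace is what lets a single host exercise real TCP traffic between initiator and target over physical NICs cabled back-to-back.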
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2160806 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2160806 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2160806 ']' 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:10.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.843 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:11.409 
11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.409 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.409 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.409 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:11.409 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:23.605 Initializing NVMe Controllers 00:10:23.605 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:23.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:23.605 Initialization complete. Launching workers. 00:10:23.605 ======================================================== 00:10:23.605 Latency(us) 00:10:23.605 Device Information : IOPS MiB/s Average min max 00:10:23.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17955.51 70.14 3565.09 543.31 15570.84 00:10:23.605 ======================================================== 00:10:23.605 Total : 17955.51 70.14 3565.09 543.31 15570.84 00:10:23.605 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:23.605 rmmod nvme_tcp 00:10:23.605 rmmod nvme_fabrics 00:10:23.605 rmmod nvme_keyring 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
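The `rpc_cmd` calls traced at target/nvmf_example.sh@45-61 configure the running example target and then drive it with `spdk_nvme_perf`. A dry-run sketch of the equivalent sequence, assuming the standard `scripts/rpc.py` front end and an illustrative `$SPDK_DIR` path (the commands are only printed, since they need a live SPDK target):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target configuration and perf run traced above.
# $SPDK_DIR is a hypothetical location; commands are printed, not executed,
# because they require a running SPDK nvmf target application.
SPDK_DIR=/path/to/spdk
NQN=nqn.2016-06.io.spdk:cnode1
steps=(
  "$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192"
  "$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512"   # 64 MiB bdev, 512 B blocks -> Malloc0
  "$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem $NQN -a -s SPDK00000000000001"
  "$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns $NQN Malloc0"
  "$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
  "$SPDK_DIR/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN'"
)
printf '%s\n' "${steps[@]}"
```

The order matters: the transport must exist before a listener can be added, and the namespace's backing bdev must exist before `nvmf_subsystem_add_ns`.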
00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2160806 ']' 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2160806 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2160806 ']' 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2160806 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:23.605 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2160806 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2160806' 00:10:23.606 killing process with pid 2160806 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2160806 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2160806 00:10:23.606 nvmf threads initialize successfully 00:10:23.606 bdev subsystem init successfully 00:10:23.606 created a nvmf target service 00:10:23.606 create targets's poll groups done 00:10:23.606 all subsystems of target started 00:10:23.606 nvmf target is running 00:10:23.606 all subsystems of target stopped 00:10:23.606 destroy targets's poll groups done 00:10:23.606 destroyed the nvmf target service 00:10:23.606 bdev subsystem 
finish successfully 00:10:23.606 nvmf threads destroy successfully 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.606 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.175 00:10:24.175 real 0m19.906s 00:10:24.175 user 0m46.492s 00:10:24.175 sys 0m6.017s 00:10:24.175 
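The `ipts`/`iptr` pair seen at nvmf/common.sh@287 and @297 is a tagging pattern: every rule added through `ipts` carries an `SPDK_NVMF:` comment, so teardown can strip exactly the test's rules with `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A sketch of the same pattern, with `iptables` stubbed by `echo` so it runs unprivileged (the helper names match the log; the stub is an assumption for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the ipts/iptr tagging pattern from nvmf/common.sh:
# rules added via ipts are tagged with an SPDK_NVMF comment so iptr can
# later remove all of them at once. iptables is stubbed with echo here
# so the sketch runs without root.
iptables() { echo "iptables $*"; }

ipts() {
    # tag the rule with its own argument string for later cleanup
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # the real helper pipes the ruleset through grep -v to drop tagged rules:
    echo "iptables-save | grep -v SPDK_NVMF | iptables-restore"
}

rule="$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)"
echo "$rule"
iptr
```

Filtering on the comment string avoids having to remember and individually delete each rule the test inserted.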
11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.175 ************************************ 00:10:24.175 END TEST nvmf_example 00:10:24.175 ************************************ 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:24.175 ************************************ 00:10:24.175 START TEST nvmf_filesystem 00:10:24.175 ************************************ 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:24.175 * Looking for test storage... 
00:10:24.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:24.175 
11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.175 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:24.440 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:24.440 --rc genhtml_branch_coverage=1 00:10:24.440 --rc genhtml_function_coverage=1 00:10:24.440 --rc genhtml_legend=1 00:10:24.440 --rc geninfo_all_blocks=1 00:10:24.440 --rc geninfo_unexecuted_blocks=1 00:10:24.440 00:10:24.440 ' 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:24.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.440 --rc genhtml_branch_coverage=1 00:10:24.440 --rc genhtml_function_coverage=1 00:10:24.440 --rc genhtml_legend=1 00:10:24.440 --rc geninfo_all_blocks=1 00:10:24.440 --rc geninfo_unexecuted_blocks=1 00:10:24.440 00:10:24.440 ' 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:24.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.440 --rc genhtml_branch_coverage=1 00:10:24.440 --rc genhtml_function_coverage=1 00:10:24.440 --rc genhtml_legend=1 00:10:24.440 --rc geninfo_all_blocks=1 00:10:24.440 --rc geninfo_unexecuted_blocks=1 00:10:24.440 00:10:24.440 ' 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:24.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.440 --rc genhtml_branch_coverage=1 00:10:24.440 --rc genhtml_function_coverage=1 00:10:24.440 --rc genhtml_legend=1 00:10:24.440 --rc geninfo_all_blocks=1 00:10:24.440 --rc geninfo_unexecuted_blocks=1 00:10:24.440 00:10:24.440 ' 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:24.440 11:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:24.440 11:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:24.440 11:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:24.440 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:24.441 11:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:24.441 11:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:24.441 
11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:24.441 #define SPDK_CONFIG_H 00:10:24.441 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:24.441 #define SPDK_CONFIG_APPS 1 00:10:24.441 #define SPDK_CONFIG_ARCH native 00:10:24.441 #undef SPDK_CONFIG_ASAN 00:10:24.441 #undef SPDK_CONFIG_AVAHI 00:10:24.441 #undef SPDK_CONFIG_CET 00:10:24.441 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:24.441 #define SPDK_CONFIG_COVERAGE 1 00:10:24.441 #define SPDK_CONFIG_CROSS_PREFIX 00:10:24.441 #undef SPDK_CONFIG_CRYPTO 00:10:24.441 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:24.441 #undef SPDK_CONFIG_CUSTOMOCF 00:10:24.441 #undef SPDK_CONFIG_DAOS 00:10:24.441 #define SPDK_CONFIG_DAOS_DIR 00:10:24.441 #define SPDK_CONFIG_DEBUG 1 00:10:24.441 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:24.441 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:24.441 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:24.441 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:24.441 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:24.441 #undef SPDK_CONFIG_DPDK_UADK 00:10:24.441 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:24.441 #define SPDK_CONFIG_EXAMPLES 1 00:10:24.441 #undef SPDK_CONFIG_FC 00:10:24.441 #define SPDK_CONFIG_FC_PATH 00:10:24.441 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:24.441 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:24.441 #define SPDK_CONFIG_FSDEV 1 00:10:24.441 #undef SPDK_CONFIG_FUSE 00:10:24.441 #undef SPDK_CONFIG_FUZZER 00:10:24.441 #define SPDK_CONFIG_FUZZER_LIB 00:10:24.441 #undef SPDK_CONFIG_GOLANG 00:10:24.441 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:24.441 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:24.441 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:24.441 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:24.441 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:24.441 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:24.441 #undef SPDK_CONFIG_HAVE_LZ4 00:10:24.441 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:24.441 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:24.441 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:24.441 #define SPDK_CONFIG_IDXD 1 00:10:24.441 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:24.441 #undef SPDK_CONFIG_IPSEC_MB 00:10:24.441 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:24.441 #define SPDK_CONFIG_ISAL 1 00:10:24.441 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:24.441 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:24.441 #define SPDK_CONFIG_LIBDIR 00:10:24.441 #undef SPDK_CONFIG_LTO 00:10:24.441 #define SPDK_CONFIG_MAX_LCORES 128 00:10:24.441 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:24.441 #define SPDK_CONFIG_NVME_CUSE 1 00:10:24.441 #undef SPDK_CONFIG_OCF 00:10:24.441 #define SPDK_CONFIG_OCF_PATH 00:10:24.441 #define SPDK_CONFIG_OPENSSL_PATH 00:10:24.441 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:24.441 #define SPDK_CONFIG_PGO_DIR 00:10:24.441 #undef SPDK_CONFIG_PGO_USE 00:10:24.441 #define SPDK_CONFIG_PREFIX /usr/local 00:10:24.441 #undef SPDK_CONFIG_RAID5F 00:10:24.441 #undef SPDK_CONFIG_RBD 00:10:24.441 #define SPDK_CONFIG_RDMA 1 00:10:24.441 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:24.441 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:24.441 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:24.441 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:24.441 #define SPDK_CONFIG_SHARED 1 00:10:24.441 #undef SPDK_CONFIG_SMA 00:10:24.441 #define SPDK_CONFIG_TESTS 1 00:10:24.441 #undef SPDK_CONFIG_TSAN 00:10:24.441 #define SPDK_CONFIG_UBLK 1 00:10:24.441 #define SPDK_CONFIG_UBSAN 1 00:10:24.441 #undef SPDK_CONFIG_UNIT_TESTS 00:10:24.441 #undef SPDK_CONFIG_URING 00:10:24.441 #define SPDK_CONFIG_URING_PATH 00:10:24.441 #undef SPDK_CONFIG_URING_ZNS 00:10:24.441 #undef SPDK_CONFIG_USDT 00:10:24.441 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:24.441 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:24.441 #define SPDK_CONFIG_VFIO_USER 1 00:10:24.441 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:24.441 #define SPDK_CONFIG_VHOST 1 00:10:24.441 #define SPDK_CONFIG_VIRTIO 1 00:10:24.441 #undef SPDK_CONFIG_VTUNE 00:10:24.441 #define SPDK_CONFIG_VTUNE_DIR 00:10:24.441 #define SPDK_CONFIG_WERROR 1 00:10:24.441 #define SPDK_CONFIG_WPDK_DIR 00:10:24.441 #undef SPDK_CONFIG_XNVME 00:10:24.441 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.441 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.442 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.442 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.442 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:24.442 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.442 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.442 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:24.442 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.442 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:24.442 11:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:24.442 
11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:24.442 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:24.443 11:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:24.443 
11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:24.443 11:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:24.443 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2163207 ]] 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2163207 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.W91cCC 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:24.444 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.W91cCC/tests/target /tmp/spdk.W91cCC 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=188994285568 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6969675776 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981616128 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:10:24.445 11:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=364544 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:24.445 * Looking for test storage... 
00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=188994285568 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9184268288 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.445 11:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:24.445 11:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.445 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:24.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.446 --rc genhtml_branch_coverage=1 00:10:24.446 --rc genhtml_function_coverage=1 00:10:24.446 --rc genhtml_legend=1 00:10:24.446 --rc geninfo_all_blocks=1 00:10:24.446 --rc geninfo_unexecuted_blocks=1 00:10:24.446 00:10:24.446 ' 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:24.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.446 --rc genhtml_branch_coverage=1 00:10:24.446 --rc genhtml_function_coverage=1 00:10:24.446 --rc genhtml_legend=1 00:10:24.446 --rc geninfo_all_blocks=1 00:10:24.446 --rc geninfo_unexecuted_blocks=1 00:10:24.446 00:10:24.446 ' 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:24.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.446 --rc genhtml_branch_coverage=1 00:10:24.446 --rc genhtml_function_coverage=1 00:10:24.446 --rc genhtml_legend=1 00:10:24.446 --rc geninfo_all_blocks=1 00:10:24.446 --rc geninfo_unexecuted_blocks=1 00:10:24.446 00:10:24.446 ' 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:24.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.446 --rc genhtml_branch_coverage=1 00:10:24.446 --rc genhtml_function_coverage=1 00:10:24.446 --rc genhtml_legend=1 00:10:24.446 --rc geninfo_all_blocks=1 00:10:24.446 --rc geninfo_unexecuted_blocks=1 00:10:24.446 00:10:24.446 ' 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.446 11:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.446 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.706 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.707 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.282 11:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:31.282 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:31.282 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.282 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.283 11:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:31.283 Found net devices under 0000:86:00.0: cvl_0_0 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:31.283 Found net devices under 0000:86:00.1: cvl_0_1 00:10:31.283 11:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.283 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:31.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:31.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:10:31.283 00:10:31.283 --- 10.0.0.2 ping statistics --- 00:10:31.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.283 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:31.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:10:31.283 00:10:31.283 --- 10.0.0.1 ping statistics --- 00:10:31.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.283 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:31.283 11:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.283 ************************************ 00:10:31.283 START TEST nvmf_filesystem_no_in_capsule 00:10:31.283 ************************************ 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2166300 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2166300 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2166300 ']' 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.283 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.284 [2024-11-19 11:21:44.353227] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:10:31.284 [2024-11-19 11:21:44.353273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.284 [2024-11-19 11:21:44.433551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.284 [2024-11-19 11:21:44.476077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.284 [2024-11-19 11:21:44.476114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:31.284 [2024-11-19 11:21:44.476121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.284 [2024-11-19 11:21:44.476131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.284 [2024-11-19 11:21:44.476136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.284 [2024-11-19 11:21:44.477733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.284 [2024-11-19 11:21:44.477754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.284 [2024-11-19 11:21:44.477855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.284 [2024-11-19 11:21:44.477855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.284 [2024-11-19 11:21:44.615731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.284 Malloc1 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.284 [2024-11-19 11:21:44.762302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:31.284 11:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:31.284 { 00:10:31.284 "name": "Malloc1", 00:10:31.284 "aliases": [ 00:10:31.284 "67e466e2-fa33-4809-8c17-9e4196be2c6f" 00:10:31.284 ], 00:10:31.284 "product_name": "Malloc disk", 00:10:31.284 "block_size": 512, 00:10:31.284 "num_blocks": 1048576, 00:10:31.284 "uuid": "67e466e2-fa33-4809-8c17-9e4196be2c6f", 00:10:31.284 "assigned_rate_limits": { 00:10:31.284 "rw_ios_per_sec": 0, 00:10:31.284 "rw_mbytes_per_sec": 0, 00:10:31.284 "r_mbytes_per_sec": 0, 00:10:31.284 "w_mbytes_per_sec": 0 00:10:31.284 }, 00:10:31.284 "claimed": true, 00:10:31.284 "claim_type": "exclusive_write", 00:10:31.284 "zoned": false, 00:10:31.284 "supported_io_types": { 00:10:31.284 "read": true, 00:10:31.284 "write": true, 00:10:31.284 "unmap": true, 00:10:31.284 "flush": true, 00:10:31.284 "reset": true, 00:10:31.284 "nvme_admin": false, 00:10:31.284 "nvme_io": false, 00:10:31.284 "nvme_io_md": false, 00:10:31.284 "write_zeroes": true, 00:10:31.284 "zcopy": true, 00:10:31.284 "get_zone_info": false, 00:10:31.284 "zone_management": false, 00:10:31.284 "zone_append": false, 00:10:31.284 "compare": false, 00:10:31.284 "compare_and_write": 
false, 00:10:31.284 "abort": true, 00:10:31.284 "seek_hole": false, 00:10:31.284 "seek_data": false, 00:10:31.284 "copy": true, 00:10:31.284 "nvme_iov_md": false 00:10:31.284 }, 00:10:31.284 "memory_domains": [ 00:10:31.284 { 00:10:31.284 "dma_device_id": "system", 00:10:31.284 "dma_device_type": 1 00:10:31.284 }, 00:10:31.284 { 00:10:31.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.284 "dma_device_type": 2 00:10:31.284 } 00:10:31.284 ], 00:10:31.284 "driver_specific": {} 00:10:31.284 } 00:10:31.284 ]' 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:31.284 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:31.285 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:31.285 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:31.285 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:31.285 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:32.657 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:32.657 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:32.657 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:32.657 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:32.657 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:34.554 11:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:34.554 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:34.555 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:34.555 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:34.555 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:34.812 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:35.743 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:36.674 11:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.674 ************************************ 00:10:36.674 START TEST filesystem_ext4 00:10:36.674 ************************************ 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:36.674 11:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:36.674 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:36.674 mke2fs 1.47.0 (5-Feb-2023) 00:10:36.674 Discarding device blocks: 0/522240 done 00:10:36.674 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:36.674 Filesystem UUID: 05796951-e2d4-44d8-aaae-97b90822257b 00:10:36.674 Superblock backups stored on blocks: 00:10:36.674 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:36.674 00:10:36.675 Allocating group tables: 0/64 done 00:10:36.675 Writing inode tables: 0/64 done 00:10:36.932 Creating journal (8192 blocks): done 00:10:37.496 Writing superblocks and filesystem accounting information: 0/64 done 00:10:37.496 00:10:37.496 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:37.496 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:44.047 11:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2166300 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:44.047 00:10:44.047 real 0m7.243s 00:10:44.047 user 0m0.030s 00:10:44.047 sys 0m0.070s 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:44.047 ************************************ 00:10:44.047 END TEST filesystem_ext4 00:10:44.047 ************************************ 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:44.047 
11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.047 ************************************ 00:10:44.047 START TEST filesystem_btrfs 00:10:44.047 ************************************ 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:44.047 11:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:44.047 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:44.047 btrfs-progs v6.8.1 00:10:44.047 See https://btrfs.readthedocs.io for more information. 00:10:44.047 00:10:44.047 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:44.047 NOTE: several default settings have changed in version 5.15, please make sure 00:10:44.047 this does not affect your deployments: 00:10:44.047 - DUP for metadata (-m dup) 00:10:44.047 - enabled no-holes (-O no-holes) 00:10:44.047 - enabled free-space-tree (-R free-space-tree) 00:10:44.047 00:10:44.047 Label: (null) 00:10:44.047 UUID: 2c71df11-a8b7-44cb-922c-7e2cf24bb16c 00:10:44.047 Node size: 16384 00:10:44.047 Sector size: 4096 (CPU page size: 4096) 00:10:44.047 Filesystem size: 510.00MiB 00:10:44.047 Block group profiles: 00:10:44.048 Data: single 8.00MiB 00:10:44.048 Metadata: DUP 32.00MiB 00:10:44.048 System: DUP 8.00MiB 00:10:44.048 SSD detected: yes 00:10:44.048 Zoned device: no 00:10:44.048 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:44.048 Checksum: crc32c 00:10:44.048 Number of devices: 1 00:10:44.048 Devices: 00:10:44.048 ID SIZE PATH 00:10:44.048 1 510.00MiB /dev/nvme0n1p1 00:10:44.048 00:10:44.048 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:44.048 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:44.983 11:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:44.983 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:44.983 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:44.983 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:44.983 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:44.983 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:44.983 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2166300 00:10:44.983 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:44.983 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:44.983 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:44.983 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:44.983 00:10:44.983 real 0m1.202s 00:10:44.983 user 0m0.033s 00:10:44.983 sys 0m0.109s 00:10:44.983 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.983 
11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:44.983 ************************************ 00:10:44.983 END TEST filesystem_btrfs 00:10:44.983 ************************************ 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.242 ************************************ 00:10:45.242 START TEST filesystem_xfs 00:10:45.242 ************************************ 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:45.242 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:45.242 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:45.242 = sectsz=512 attr=2, projid32bit=1 00:10:45.242 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:45.242 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:45.242 data = bsize=4096 blocks=130560, imaxpct=25 00:10:45.242 = sunit=0 swidth=0 blks 00:10:45.242 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:45.242 log =internal log bsize=4096 blocks=16384, version=2 00:10:45.242 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:45.242 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:46.177 Discarding blocks...Done. 
00:10:46.177 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:46.177 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2166300 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:48.078 11:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:48.078 00:10:48.078 real 0m2.593s 00:10:48.078 user 0m0.022s 00:10:48.078 sys 0m0.076s 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:48.078 ************************************ 00:10:48.078 END TEST filesystem_xfs 00:10:48.078 ************************************ 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2166300 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2166300 ']' 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2166300 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2166300 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2166300' 00:10:48.078 killing process with pid 2166300 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2166300 00:10:48.078 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2166300 00:10:48.337 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:48.337 00:10:48.337 real 0m17.809s 00:10:48.337 user 1m10.083s 00:10:48.337 sys 0m1.427s 00:10:48.337 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.337 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.337 ************************************ 00:10:48.337 END TEST nvmf_filesystem_no_in_capsule 00:10:48.337 ************************************ 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.595 11:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.595 ************************************ 00:10:48.595 START TEST nvmf_filesystem_in_capsule 00:10:48.595 ************************************ 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2169482 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2169482 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2169482 ']' 00:10:48.595 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.596 11:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.596 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.596 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.596 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.596 [2024-11-19 11:22:02.231809] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:10:48.596 [2024-11-19 11:22:02.231850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.596 [2024-11-19 11:22:02.310645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.596 [2024-11-19 11:22:02.353064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.596 [2024-11-19 11:22:02.353101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.596 [2024-11-19 11:22:02.353108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.596 [2024-11-19 11:22:02.353114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.596 [2024-11-19 11:22:02.353119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:48.596 [2024-11-19 11:22:02.354725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.596 [2024-11-19 11:22:02.354826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.596 [2024-11-19 11:22:02.354932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.596 [2024-11-19 11:22:02.354933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.854 [2024-11-19 11:22:02.500422] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.854 Malloc1 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.854 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.113 11:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.113 [2024-11-19 11:22:02.649038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.113 11:22:02 
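The rpc_cmd calls traced above amount to a short target-setup sequence. A minimal sketch of the same configuration, assuming a running nvmf_tgt, SPDK's scripts/rpc.py on PATH, and the default /var/tmp/spdk.sock RPC socket (a configuration fragment, not executed as part of this log):

```shell
# NVMe/TCP target setup with 4096-byte in-capsule data, mirroring the trace:
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096     # -c sets in_capsule_data_size
rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The initiator side then attaches with nvme-cli as the trace does later: `nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420` (the trace additionally passes `--hostnqn`/`--hostid`).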
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:49.113 { 00:10:49.113 "name": "Malloc1", 00:10:49.113 "aliases": [ 00:10:49.113 "8f686c38-d315-435c-9a21-7c7382ce60f3" 00:10:49.113 ], 00:10:49.113 "product_name": "Malloc disk", 00:10:49.113 "block_size": 512, 00:10:49.113 "num_blocks": 1048576, 00:10:49.113 "uuid": "8f686c38-d315-435c-9a21-7c7382ce60f3", 00:10:49.113 "assigned_rate_limits": { 00:10:49.113 "rw_ios_per_sec": 0, 00:10:49.113 "rw_mbytes_per_sec": 0, 00:10:49.113 "r_mbytes_per_sec": 0, 00:10:49.113 "w_mbytes_per_sec": 0 00:10:49.113 }, 00:10:49.113 "claimed": true, 00:10:49.113 "claim_type": "exclusive_write", 00:10:49.113 "zoned": false, 00:10:49.113 "supported_io_types": { 00:10:49.113 "read": true, 00:10:49.113 "write": true, 00:10:49.113 "unmap": true, 00:10:49.113 "flush": true, 00:10:49.113 "reset": true, 00:10:49.113 "nvme_admin": false, 00:10:49.113 "nvme_io": false, 00:10:49.113 "nvme_io_md": false, 00:10:49.113 "write_zeroes": true, 00:10:49.113 "zcopy": true, 00:10:49.113 "get_zone_info": false, 00:10:49.113 "zone_management": false, 00:10:49.113 "zone_append": false, 00:10:49.113 "compare": false, 00:10:49.113 "compare_and_write": false, 00:10:49.113 "abort": true, 00:10:49.113 "seek_hole": false, 00:10:49.113 "seek_data": false, 00:10:49.113 "copy": true, 00:10:49.113 "nvme_iov_md": false 00:10:49.113 }, 00:10:49.113 "memory_domains": [ 00:10:49.113 { 00:10:49.113 "dma_device_id": "system", 00:10:49.113 "dma_device_type": 1 00:10:49.113 }, 00:10:49.113 { 00:10:49.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.113 "dma_device_type": 2 00:10:49.113 } 00:10:49.113 ], 00:10:49.113 
"driver_specific": {} 00:10:49.113 } 00:10:49.113 ]' 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:49.113 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:50.610 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:50.610 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:50.610 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:50.610 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:50.610 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:52.525 11:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:52.525 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:52.525 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:52.525 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:52.525 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:53.092 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:54.027 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:54.027 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:54.027 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:54.027 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.027 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.287 ************************************ 00:10:54.287 START TEST filesystem_in_capsule_ext4 00:10:54.287 ************************************ 00:10:54.287 11:22:07 
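The get_bdev_size helper traced above derives the size from bdev_get_bdevs output as block_size × num_blocks, echoes it in MiB, and the test then converts it back to bytes as malloc_size before comparing against the nvme device size. A runnable re-creation of that arithmetic, using the values from the Malloc1 JSON in the trace (512-byte blocks, 1048576 blocks):

```shell
# Recompute what get_bdev_size and malloc_size derive in the trace:
bs=512        # .block_size from bdev_get_bdevs -b Malloc1
nb=1048576    # .num_blocks from bdev_get_bdevs -b Malloc1
bdev_size_mib=$(( bs * nb / 1024 / 1024 ))        # what get_bdev_size echoes
malloc_size_bytes=$(( bdev_size_mib * 1024 * 1024 ))  # what the test compares to nvme_size
echo "$bdev_size_mib $malloc_size_bytes"   # → 512 536870912
```

Both values match the trace: `echo 512` from get_bdev_size and `nvme_size=536870912` from sec_size_to_bytes, which is why the `(( nvme_size == malloc_size ))` check passes.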
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:54.287 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:54.287 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:54.287 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:54.287 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:54.287 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:54.287 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:54.287 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:54.287 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:54.287 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:54.287 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:54.287 mke2fs 1.47.0 (5-Feb-2023) 00:10:54.287 Discarding device blocks: 
0/522240 done 00:10:54.287 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:54.287 Filesystem UUID: b1cb2153-e1f0-4b41-b9e2-2c67f3d48f04 00:10:54.287 Superblock backups stored on blocks: 00:10:54.287 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:54.287 00:10:54.287 Allocating group tables: 0/64 done 00:10:54.287 Writing inode tables: 0/64 done 00:10:54.287 Creating journal (8192 blocks): done 00:10:54.287 Writing superblocks and filesystem accounting information: 0/64 done 00:10:54.287 00:10:54.287 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:54.287 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:00.846 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2169482 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:00.846 00:11:00.846 real 0m6.234s 00:11:00.846 user 0m0.017s 00:11:00.846 sys 0m0.080s 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:00.846 ************************************ 00:11:00.846 END TEST filesystem_in_capsule_ext4 00:11:00.846 ************************************ 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.846 ************************************ 00:11:00.846 START 
TEST filesystem_in_capsule_btrfs 00:11:00.846 ************************************ 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:00.846 btrfs-progs v6.8.1 00:11:00.846 See https://btrfs.readthedocs.io for more information. 00:11:00.846 00:11:00.846 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:00.846 NOTE: several default settings have changed in version 5.15, please make sure 00:11:00.846 this does not affect your deployments: 00:11:00.846 - DUP for metadata (-m dup) 00:11:00.846 - enabled no-holes (-O no-holes) 00:11:00.846 - enabled free-space-tree (-R free-space-tree) 00:11:00.846 00:11:00.846 Label: (null) 00:11:00.846 UUID: f4eda537-f883-4dc4-b444-b2e58b8d98df 00:11:00.846 Node size: 16384 00:11:00.846 Sector size: 4096 (CPU page size: 4096) 00:11:00.846 Filesystem size: 510.00MiB 00:11:00.846 Block group profiles: 00:11:00.846 Data: single 8.00MiB 00:11:00.846 Metadata: DUP 32.00MiB 00:11:00.846 System: DUP 8.00MiB 00:11:00.846 SSD detected: yes 00:11:00.846 Zoned device: no 00:11:00.846 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:00.846 Checksum: crc32c 00:11:00.846 Number of devices: 1 00:11:00.846 Devices: 00:11:00.846 ID SIZE PATH 00:11:00.846 1 510.00MiB /dev/nvme0n1p1 00:11:00.846 00:11:00.846 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:00.847 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.413 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.413 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:01.413 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.413 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:01.671 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:01.671 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.671 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2169482 00:11:01.671 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.671 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:01.671 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:01.671 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:01.671 00:11:01.671 real 0m1.105s 00:11:01.671 user 0m0.019s 00:11:01.671 sys 0m0.120s 00:11:01.671 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.671 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:01.671 ************************************ 00:11:01.671 END TEST filesystem_in_capsule_btrfs 00:11:01.671 ************************************ 00:11:01.671 11:22:15 
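Each filesystem case (ext4, btrfs, and xfs below) runs the same smoke test after mkfs: mount the partition, touch a file, sync, remove it, sync again, umount, then confirm the nvmf_tgt process is still alive with `kill -0` and that lsblk still shows the device. A runnable sketch of that loop body, substituting a scratch directory for /dev/nvme0n1p1 so it works without a target:

```shell
# Smoke-test body from target/filesystem.sh, against a scratch dir:
mnt=$(mktemp -d)      # stands in for 'mount /dev/nvme0n1p1 /mnt/device'
touch "$mnt/aaa"      # create a file on the fresh filesystem
sync                  # flush dirty data (through the NVMe/TCP path in the real test)
rm "$mnt/aaa"         # delete it again
sync
rmdir "$mnt"          # stands in for 'umount /mnt/device'
kill -0 $$            # the real test probes the nvmf_tgt pid (2169482) this way
```

`kill -0` sends no signal; it only checks that the pid exists and is signalable, which is how the test detects a target crash mid-run.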
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:01.671 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:01.671 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.672 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.672 ************************************ 00:11:01.672 START TEST filesystem_in_capsule_xfs 00:11:01.672 ************************************ 00:11:01.672 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:01.672 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:01.672 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:01.672 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:01.672 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:01.672 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:01.672 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:01.672 
11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:01.672 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:01.672 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:01.672 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:01.672 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:01.672 = sectsz=512 attr=2, projid32bit=1 00:11:01.672 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:01.672 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:01.672 data = bsize=4096 blocks=130560, imaxpct=25 00:11:01.672 = sunit=0 swidth=0 blks 00:11:01.672 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:01.672 log =internal log bsize=4096 blocks=16384, version=2 00:11:01.672 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:01.672 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:02.607 Discarding blocks...Done. 
00:11:02.607 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:02.607 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:04.507 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:04.507 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:04.508 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:04.508 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:04.508 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:04.508 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:04.508 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2169482 00:11:04.508 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:04.508 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:04.508 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:04.508 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:04.508 00:11:04.508 real 0m2.601s 00:11:04.508 user 0m0.024s 00:11:04.508 sys 0m0.074s 00:11:04.508 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.508 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:04.508 ************************************ 00:11:04.508 END TEST filesystem_in_capsule_xfs 00:11:04.508 ************************************ 00:11:04.508 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:04.508 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:04.508 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.766 11:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2169482 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2169482 ']' 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2169482 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.766 11:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2169482 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2169482' 00:11:04.766 killing process with pid 2169482 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2169482 00:11:04.766 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2169482 00:11:05.025 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:05.025 00:11:05.025 real 0m16.558s 00:11:05.025 user 1m5.120s 00:11:05.025 sys 0m1.402s 00:11:05.025 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.025 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.025 ************************************ 00:11:05.025 END TEST nvmf_filesystem_in_capsule 00:11:05.025 ************************************ 00:11:05.025 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:05.025 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:05.025 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:05.025 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.025 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:05.025 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.025 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.025 rmmod nvme_tcp 00:11:05.025 rmmod nvme_fabrics 00:11:05.285 rmmod nvme_keyring 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.285 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.203 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:07.203 00:11:07.203 real 0m43.115s 00:11:07.203 user 2m17.327s 00:11:07.203 sys 0m7.488s 00:11:07.203 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.203 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:07.203 ************************************ 00:11:07.203 END TEST nvmf_filesystem 00:11:07.203 ************************************ 00:11:07.203 11:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:07.203 11:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:07.203 11:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.203 11:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:07.203 ************************************ 00:11:07.203 START TEST nvmf_target_discovery 00:11:07.203 ************************************ 00:11:07.203 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:07.462 * Looking for test storage... 
00:11:07.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:07.462 
11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:07.462 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:07.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.462 --rc genhtml_branch_coverage=1 00:11:07.462 --rc genhtml_function_coverage=1 00:11:07.462 --rc genhtml_legend=1 00:11:07.462 --rc geninfo_all_blocks=1 00:11:07.463 --rc geninfo_unexecuted_blocks=1 00:11:07.463 00:11:07.463 ' 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:07.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.463 --rc genhtml_branch_coverage=1 00:11:07.463 --rc genhtml_function_coverage=1 00:11:07.463 --rc genhtml_legend=1 00:11:07.463 --rc geninfo_all_blocks=1 00:11:07.463 --rc geninfo_unexecuted_blocks=1 00:11:07.463 00:11:07.463 ' 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:07.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.463 --rc genhtml_branch_coverage=1 00:11:07.463 --rc genhtml_function_coverage=1 00:11:07.463 --rc genhtml_legend=1 00:11:07.463 --rc geninfo_all_blocks=1 00:11:07.463 --rc geninfo_unexecuted_blocks=1 00:11:07.463 00:11:07.463 ' 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:07.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.463 --rc genhtml_branch_coverage=1 00:11:07.463 --rc genhtml_function_coverage=1 00:11:07.463 --rc genhtml_legend=1 00:11:07.463 --rc geninfo_all_blocks=1 00:11:07.463 --rc geninfo_unexecuted_blocks=1 00:11:07.463 00:11:07.463 ' 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.463 11:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:07.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:07.463 11:22:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.031 11:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.031 11:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:14.031 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:14.032 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:14.032 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.032 11:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:14.032 Found net devices under 0000:86:00.0: cvl_0_0 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.032 11:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:14.032 Found net devices under 0000:86:00.1: cvl_0_1 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.032 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:14.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:11:14.032 00:11:14.032 --- 10.0.0.2 ping statistics --- 00:11:14.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.032 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:11:14.032 00:11:14.032 --- 10.0.0.1 ping statistics --- 00:11:14.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.032 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2175992 00:11:14.032 11:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2175992 00:11:14.032 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2175992 ']' 00:11:14.033 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.033 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.033 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.033 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.033 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.033 [2024-11-19 11:22:27.240438] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:11:14.033 [2024-11-19 11:22:27.240483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.033 [2024-11-19 11:22:27.321541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.033 [2024-11-19 11:22:27.364751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
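The network setup traced above (flush addresses, create a namespace for the target side, move `cvl_0_0` into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, then ping both directions) can be sketched as the script below. This is a minimal reconstruction from the log, not SPDK's actual `nvmf_tcp_init`; interface and namespace names are taken from the output above, and a `run` wrapper prints the commands in dry-run mode so the sketch is runnable without root.

```shell
#!/usr/bin/env bash
# Sketch of the netns topology nvmf/common.sh builds for NVMe/TCP tests.
# Names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.0/24) come from the log.
set -euo pipefail

# Print instead of executing when DRY_RUN=1 (these commands need root).
run() { if [[ "${DRY_RUN:-0}" == 1 ]]; then echo "+ $*"; else "$@"; fi; }

setup_nvmf_tcp_ns() {
  local ns=cvl_0_0_ns_spdk   # namespace holding the target-side interface
  local tgt_if=cvl_0_0       # target side, moved into the namespace
  local ini_if=cvl_0_1       # initiator side, stays in the root namespace

  run ip -4 addr flush "$tgt_if"
  run ip -4 addr flush "$ini_if"
  run ip netns add "$ns"
  run ip link set "$tgt_if" netns "$ns"
  run ip addr add 10.0.0.1/24 dev "$ini_if"
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run ip netns exec "$ns" ip link set lo up
  # Open the NVMe/TCP port on the initiator-facing interface.
  run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  # Verify reachability in both directions, as the log does.
  run ping -c 1 10.0.0.2
  run ip netns exec "$ns" ping -c 1 10.0.0.1
}

DRY_RUN=1 setup_nvmf_tcp_ns
```

Isolating the target interface in its own namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, in the root namespace) over a real NIC pair.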
00:11:14.033 [2024-11-19 11:22:27.364787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.033 [2024-11-19 11:22:27.364794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.033 [2024-11-19 11:22:27.364804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.033 [2024-11-19 11:22:27.364809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.033 [2024-11-19 11:22:27.366415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.033 [2024-11-19 11:22:27.366540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.033 [2024-11-19 11:22:27.366560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.033 [2024-11-19 11:22:27.366562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 [2024-11-19 11:22:28.130067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 Null1 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.600 
11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 [2024-11-19 11:22:28.175569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 Null2 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 
11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 Null3 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.600 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.601 Null4 00:11:14.601 
11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.601 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:14.859 00:11:14.859 Discovery Log Number of Records 6, Generation counter 6 00:11:14.859 =====Discovery Log Entry 0====== 00:11:14.859 trtype: tcp 00:11:14.859 adrfam: ipv4 00:11:14.859 subtype: current discovery subsystem 00:11:14.859 treq: not required 00:11:14.859 portid: 0 00:11:14.859 trsvcid: 4420 00:11:14.859 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:14.859 traddr: 10.0.0.2 00:11:14.859 eflags: explicit discovery connections, duplicate discovery information 00:11:14.859 sectype: none 00:11:14.859 =====Discovery Log Entry 1====== 00:11:14.859 trtype: tcp 00:11:14.859 adrfam: ipv4 00:11:14.859 subtype: nvme subsystem 00:11:14.859 treq: not required 00:11:14.859 portid: 0 00:11:14.859 trsvcid: 4420 00:11:14.859 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:14.859 traddr: 10.0.0.2 00:11:14.859 eflags: none 00:11:14.859 sectype: none 00:11:14.859 =====Discovery Log Entry 2====== 00:11:14.859 
trtype: tcp 00:11:14.859 adrfam: ipv4 00:11:14.859 subtype: nvme subsystem 00:11:14.859 treq: not required 00:11:14.859 portid: 0 00:11:14.859 trsvcid: 4420 00:11:14.859 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:14.859 traddr: 10.0.0.2 00:11:14.859 eflags: none 00:11:14.859 sectype: none 00:11:14.859 =====Discovery Log Entry 3====== 00:11:14.859 trtype: tcp 00:11:14.859 adrfam: ipv4 00:11:14.859 subtype: nvme subsystem 00:11:14.859 treq: not required 00:11:14.859 portid: 0 00:11:14.859 trsvcid: 4420 00:11:14.859 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:14.859 traddr: 10.0.0.2 00:11:14.859 eflags: none 00:11:14.859 sectype: none 00:11:14.859 =====Discovery Log Entry 4====== 00:11:14.859 trtype: tcp 00:11:14.859 adrfam: ipv4 00:11:14.859 subtype: nvme subsystem 00:11:14.859 treq: not required 00:11:14.859 portid: 0 00:11:14.859 trsvcid: 4420 00:11:14.859 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:14.859 traddr: 10.0.0.2 00:11:14.859 eflags: none 00:11:14.859 sectype: none 00:11:14.859 =====Discovery Log Entry 5====== 00:11:14.859 trtype: tcp 00:11:14.859 adrfam: ipv4 00:11:14.859 subtype: discovery subsystem referral 00:11:14.859 treq: not required 00:11:14.859 portid: 0 00:11:14.859 trsvcid: 4430 00:11:14.859 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:14.859 traddr: 10.0.0.2 00:11:14.859 eflags: none 00:11:14.859 sectype: none 00:11:14.859 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:14.859 Perform nvmf subsystem discovery via RPC 00:11:14.859 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:14.859 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.859 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.859 [ 00:11:14.859 { 00:11:14.859 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:14.859 "subtype": "Discovery", 00:11:14.859 "listen_addresses": [ 00:11:14.859 { 00:11:14.859 "trtype": "TCP", 00:11:14.859 "adrfam": "IPv4", 00:11:14.859 "traddr": "10.0.0.2", 00:11:14.859 "trsvcid": "4420" 00:11:14.859 } 00:11:14.859 ], 00:11:14.859 "allow_any_host": true, 00:11:14.859 "hosts": [] 00:11:14.859 }, 00:11:14.859 { 00:11:14.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.860 "subtype": "NVMe", 00:11:14.860 "listen_addresses": [ 00:11:14.860 { 00:11:14.860 "trtype": "TCP", 00:11:14.860 "adrfam": "IPv4", 00:11:14.860 "traddr": "10.0.0.2", 00:11:14.860 "trsvcid": "4420" 00:11:14.860 } 00:11:14.860 ], 00:11:14.860 "allow_any_host": true, 00:11:14.860 "hosts": [], 00:11:14.860 "serial_number": "SPDK00000000000001", 00:11:14.860 "model_number": "SPDK bdev Controller", 00:11:14.860 "max_namespaces": 32, 00:11:14.860 "min_cntlid": 1, 00:11:14.860 "max_cntlid": 65519, 00:11:14.860 "namespaces": [ 00:11:14.860 { 00:11:14.860 "nsid": 1, 00:11:14.860 "bdev_name": "Null1", 00:11:14.860 "name": "Null1", 00:11:14.860 "nguid": "A26A65AE60AF40B5AA1090716CEB0F9D", 00:11:14.860 "uuid": "a26a65ae-60af-40b5-aa10-90716ceb0f9d" 00:11:14.860 } 00:11:14.860 ] 00:11:14.860 }, 00:11:14.860 { 00:11:14.860 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:14.860 "subtype": "NVMe", 00:11:14.860 "listen_addresses": [ 00:11:14.860 { 00:11:14.860 "trtype": "TCP", 00:11:14.860 "adrfam": "IPv4", 00:11:14.860 "traddr": "10.0.0.2", 00:11:14.860 "trsvcid": "4420" 00:11:14.860 } 00:11:14.860 ], 00:11:14.860 "allow_any_host": true, 00:11:14.860 "hosts": [], 00:11:14.860 "serial_number": "SPDK00000000000002", 00:11:14.860 "model_number": "SPDK bdev Controller", 00:11:14.860 "max_namespaces": 32, 00:11:14.860 "min_cntlid": 1, 00:11:14.860 "max_cntlid": 65519, 00:11:14.860 "namespaces": [ 00:11:14.860 { 00:11:14.860 "nsid": 1, 00:11:14.860 "bdev_name": "Null2", 00:11:14.860 "name": "Null2", 00:11:14.860 "nguid": "80068B22FAF547EA828407F1893A3674", 
00:11:14.860 "uuid": "80068b22-faf5-47ea-8284-07f1893a3674" 00:11:14.860 } 00:11:14.860 ] 00:11:14.860 }, 00:11:14.860 { 00:11:14.860 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:14.860 "subtype": "NVMe", 00:11:14.860 "listen_addresses": [ 00:11:14.860 { 00:11:14.860 "trtype": "TCP", 00:11:14.860 "adrfam": "IPv4", 00:11:14.860 "traddr": "10.0.0.2", 00:11:14.860 "trsvcid": "4420" 00:11:14.860 } 00:11:14.860 ], 00:11:14.860 "allow_any_host": true, 00:11:14.860 "hosts": [], 00:11:14.860 "serial_number": "SPDK00000000000003", 00:11:14.860 "model_number": "SPDK bdev Controller", 00:11:14.860 "max_namespaces": 32, 00:11:14.860 "min_cntlid": 1, 00:11:14.860 "max_cntlid": 65519, 00:11:14.860 "namespaces": [ 00:11:14.860 { 00:11:14.860 "nsid": 1, 00:11:14.860 "bdev_name": "Null3", 00:11:14.860 "name": "Null3", 00:11:14.860 "nguid": "006158D671DA4C1E93B3E72958D3E092", 00:11:14.860 "uuid": "006158d6-71da-4c1e-93b3-e72958d3e092" 00:11:14.860 } 00:11:14.860 ] 00:11:14.860 }, 00:11:14.860 { 00:11:14.860 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:14.860 "subtype": "NVMe", 00:11:14.860 "listen_addresses": [ 00:11:14.860 { 00:11:14.860 "trtype": "TCP", 00:11:14.860 "adrfam": "IPv4", 00:11:14.860 "traddr": "10.0.0.2", 00:11:14.860 "trsvcid": "4420" 00:11:14.860 } 00:11:14.860 ], 00:11:14.860 "allow_any_host": true, 00:11:14.860 "hosts": [], 00:11:14.860 "serial_number": "SPDK00000000000004", 00:11:14.860 "model_number": "SPDK bdev Controller", 00:11:14.860 "max_namespaces": 32, 00:11:14.860 "min_cntlid": 1, 00:11:14.860 "max_cntlid": 65519, 00:11:14.860 "namespaces": [ 00:11:14.860 { 00:11:14.860 "nsid": 1, 00:11:14.860 "bdev_name": "Null4", 00:11:14.860 "name": "Null4", 00:11:14.860 "nguid": "89708D9212CC4586A8FA80E48405F022", 00:11:14.860 "uuid": "89708d92-12cc-4586-a8fa-80e48405f022" 00:11:14.860 } 00:11:14.860 ] 00:11:14.860 } 00:11:14.860 ] 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.860 
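The RPC sequence that produced the four subsystems listed in the JSON above (target/discovery.sh's `seq 1 4` loop, plus the discovery listener and referral) can be condensed as follows. This is a sketch: the `rpc` function here only echoes the calls so it runs without a live `nvmf_tgt`; in the real test each call goes through SPDK's `scripts/rpc.py` against the running target.

```shell
#!/usr/bin/env bash
# Condensed sketch of the discovery.sh setup loop seen in the log.
# Stand-in rpc() prints the call; swap in scripts/rpc.py for a real target.
set -euo pipefail

rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
  # One null bdev, one subsystem, one namespace, one TCP listener each.
  rpc bdev_null_create "Null$i" 102400 512
  rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "$(printf 'SPDK%014d' "$i")"
  rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
# Discovery service listener, plus a referral to a second (4430) port.
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

This matches the `nvme discover` output earlier in the log: six records, one current discovery subsystem, four NVMe subsystems on 4420, and one referral entry on 4430.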
11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.860 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.120 rmmod nvme_tcp 00:11:15.120 rmmod nvme_fabrics 00:11:15.120 rmmod nvme_keyring 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2175992 ']' 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2175992 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2175992 ']' 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2175992 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2175992 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2175992' 00:11:15.120 killing process with pid 2175992 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2175992 00:11:15.120 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2175992 00:11:15.379 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.379 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.379 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.379 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:15.379 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:15.379 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.379 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.379 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.379 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.379 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.379 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.379 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.290 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:17.290 00:11:17.290 real 0m10.032s 00:11:17.290 user 0m8.326s 00:11:17.290 sys 0m4.905s 00:11:17.290 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.290 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.290 ************************************ 00:11:17.290 END TEST nvmf_target_discovery 00:11:17.290 ************************************ 00:11:17.290 11:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:17.290 11:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.290 11:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.290 11:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.550 ************************************ 00:11:17.550 START TEST nvmf_referrals 00:11:17.550 ************************************ 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:17.550 * Looking for test storage... 
00:11:17.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:17.550 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:17.551 11:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:17.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.551 
--rc genhtml_branch_coverage=1 00:11:17.551 --rc genhtml_function_coverage=1 00:11:17.551 --rc genhtml_legend=1 00:11:17.551 --rc geninfo_all_blocks=1 00:11:17.551 --rc geninfo_unexecuted_blocks=1 00:11:17.551 00:11:17.551 ' 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:17.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.551 --rc genhtml_branch_coverage=1 00:11:17.551 --rc genhtml_function_coverage=1 00:11:17.551 --rc genhtml_legend=1 00:11:17.551 --rc geninfo_all_blocks=1 00:11:17.551 --rc geninfo_unexecuted_blocks=1 00:11:17.551 00:11:17.551 ' 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:17.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.551 --rc genhtml_branch_coverage=1 00:11:17.551 --rc genhtml_function_coverage=1 00:11:17.551 --rc genhtml_legend=1 00:11:17.551 --rc geninfo_all_blocks=1 00:11:17.551 --rc geninfo_unexecuted_blocks=1 00:11:17.551 00:11:17.551 ' 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:17.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.551 --rc genhtml_branch_coverage=1 00:11:17.551 --rc genhtml_function_coverage=1 00:11:17.551 --rc genhtml_legend=1 00:11:17.551 --rc geninfo_all_blocks=1 00:11:17.551 --rc geninfo_unexecuted_blocks=1 00:11:17.551 00:11:17.551 ' 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.551 
11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.551 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.552 11:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.552 11:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.552 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:24.123 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:24.123 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:24.123 Found net devices under 0000:86:00.0: cvl_0_0 00:11:24.123 11:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:24.123 Found net devices under 0000:86:00.1: cvl_0_1 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.123 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:24.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:24.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:24.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:24.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:24.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:11:24.124 00:11:24.124 --- 10.0.0.2 ping statistics --- 00:11:24.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.124 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:24.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:11:24.124 00:11:24.124 --- 10.0.0.1 ping statistics --- 00:11:24.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.124 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2179781 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2179781 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2179781 ']' 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.124 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.124 [2024-11-19 11:22:37.321752] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:11:24.124 [2024-11-19 11:22:37.321796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.124 [2024-11-19 11:22:37.400861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.124 [2024-11-19 11:22:37.441407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.124 [2024-11-19 11:22:37.441447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
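Later in this trace, the test's get_referral_ips helper extracts referral addresses from `nvme discover ... -o json` output with a jq filter (`.records[] | select(.subtype != "current discovery subsystem").traddr`) and compares them against the addresses registered via `rpc_cmd nvmf_discovery_add_referral`. As a minimal standalone illustration of that filter — the JSON payload below is a hypothetical sample, not captured from this run, since the real output needs a live nvmf target:

```shell
#!/bin/sh
# Hypothetical sample of `nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json`
# output: one entry for the discovery subsystem we queried plus two referrals.
cat > /tmp/discover_sample.json <<'EOF'
{
  "records": [
    { "subtype": "current discovery subsystem",  "traddr": "10.0.0.2"  },
    { "subtype": "discovery subsystem referral", "traddr": "127.0.0.3" },
    { "subtype": "discovery subsystem referral", "traddr": "127.0.0.2" }
  ]
}
EOF

# Same filter as target/referrals.sh: drop the record for the subsystem we
# queried, keep only referral traddrs, and sort for a stable comparison.
jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    /tmp/discover_sample.json | sort
```

With this sample input the filter prints the two referral addresses (127.0.0.2 and 127.0.0.3), which is exactly the value the test compares against the sorted list returned by `nvmf_discovery_get_referrals`.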
00:11:24.124 [2024-11-19 11:22:37.441454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.124 [2024-11-19 11:22:37.441460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.124 [2024-11-19 11:22:37.441465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.124 [2024-11-19 11:22:37.443083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.124 [2024-11-19 11:22:37.443189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.124 [2024-11-19 11:22:37.443302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.124 [2024-11-19 11:22:37.443304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.690 [2024-11-19 11:22:38.206398] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.690 [2024-11-19 11:22:38.219890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:24.690 11:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.690 11:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:24.690 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.691 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.948 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.948 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:24.949 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:25.207 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.465 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:25.465 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:25.465 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:25.465 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:25.465 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.465 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.723 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:25.981 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:25.981 11:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:25.981 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:25.981 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:25.981 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.981 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@82 -- # jq length 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.239 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:26.240 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set 
+e 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.498 rmmod nvme_tcp 00:11:26.498 rmmod nvme_fabrics 00:11:26.498 rmmod nvme_keyring 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2179781 ']' 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2179781 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2179781 ']' 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2179781 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2179781 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2179781' 00:11:26.498 killing process with pid 2179781 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 
-- # kill 2179781 00:11:26.498 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2179781 00:11:26.757 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.757 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.757 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.757 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:26.757 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:26.757 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.757 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:26.757 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.757 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.757 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.757 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.757 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.296 00:11:29.296 real 0m11.370s 00:11:29.296 user 0m14.472s 00:11:29.296 sys 0m5.251s 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.296 
************************************ 00:11:29.296 END TEST nvmf_referrals 00:11:29.296 ************************************ 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.296 ************************************ 00:11:29.296 START TEST nvmf_connect_disconnect 00:11:29.296 ************************************ 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:29.296 * Looking for test storage... 
00:11:29.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.296 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:29.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.297 --rc genhtml_branch_coverage=1 00:11:29.297 --rc genhtml_function_coverage=1 00:11:29.297 --rc genhtml_legend=1 00:11:29.297 --rc geninfo_all_blocks=1 00:11:29.297 --rc geninfo_unexecuted_blocks=1 00:11:29.297 00:11:29.297 ' 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:29.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.297 --rc genhtml_branch_coverage=1 00:11:29.297 --rc genhtml_function_coverage=1 00:11:29.297 --rc genhtml_legend=1 00:11:29.297 --rc geninfo_all_blocks=1 00:11:29.297 --rc geninfo_unexecuted_blocks=1 00:11:29.297 00:11:29.297 ' 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:29.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.297 --rc genhtml_branch_coverage=1 00:11:29.297 --rc genhtml_function_coverage=1 00:11:29.297 --rc genhtml_legend=1 00:11:29.297 --rc geninfo_all_blocks=1 00:11:29.297 --rc geninfo_unexecuted_blocks=1 00:11:29.297 00:11:29.297 ' 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:29.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.297 --rc genhtml_branch_coverage=1 00:11:29.297 --rc genhtml_function_coverage=1 00:11:29.297 --rc genhtml_legend=1 00:11:29.297 --rc geninfo_all_blocks=1 00:11:29.297 --rc geninfo_unexecuted_blocks=1 00:11:29.297 00:11:29.297 ' 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.297 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.867 11:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:35.867 11:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:35.867 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.867 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:35.868 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.868 11:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:35.868 Found net devices under 0000:86:00.0: cvl_0_0 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.868 11:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:35.868 Found net devices under 0000:86:00.1: cvl_0_1 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.868 11:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:35.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:11:35.868 00:11:35.868 --- 10.0.0.2 ping statistics --- 00:11:35.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.868 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:35.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:11:35.868 00:11:35.868 --- 10.0.0.1 ping statistics --- 00:11:35.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.868 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2183867 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2183867 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2183867 ']' 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.868 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.868 [2024-11-19 11:22:48.813951] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:11:35.868 [2024-11-19 11:22:48.813995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.868 [2024-11-19 11:22:48.895453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.868 [2024-11-19 11:22:48.938030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:35.868 [2024-11-19 11:22:48.938067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.868 [2024-11-19 11:22:48.938074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.868 [2024-11-19 11:22:48.938080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.868 [2024-11-19 11:22:48.938085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.868 [2024-11-19 11:22:48.939516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.869 [2024-11-19 11:22:48.939629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.869 [2024-11-19 11:22:48.939736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.869 [2024-11-19 11:22:48.939737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:35.869 11:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.869 [2024-11-19 11:22:49.077222] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.869 11:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.869 [2024-11-19 11:22:49.134362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:35.869 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:39.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:52.302 11:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.302 rmmod nvme_tcp 00:11:52.302 rmmod nvme_fabrics 00:11:52.302 rmmod nvme_keyring 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2183867 ']' 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2183867 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2183867 ']' 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2183867 00:11:52.302 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2183867 
00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2183867' 00:11:52.303 killing process with pid 2183867 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2183867 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2183867 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.303 11:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.303 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.272 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:54.272 00:11:54.272 real 0m25.270s 00:11:54.272 user 1m8.389s 00:11:54.272 sys 0m5.869s 00:11:54.272 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.272 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.272 ************************************ 00:11:54.272 END TEST nvmf_connect_disconnect 00:11:54.272 ************************************ 00:11:54.272 11:23:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:54.272 11:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.272 11:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.272 11:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:54.272 ************************************ 00:11:54.272 START TEST nvmf_multitarget 00:11:54.272 ************************************ 00:11:54.272 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:54.272 * Looking for test storage... 
00:11:54.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:54.272 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:54.272 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:54.272 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:54.272 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.272 --rc genhtml_branch_coverage=1 00:11:54.272 --rc genhtml_function_coverage=1 00:11:54.272 --rc genhtml_legend=1 00:11:54.272 --rc geninfo_all_blocks=1 00:11:54.272 --rc geninfo_unexecuted_blocks=1 00:11:54.272 00:11:54.272 ' 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:54.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.272 --rc genhtml_branch_coverage=1 00:11:54.272 --rc genhtml_function_coverage=1 00:11:54.272 --rc genhtml_legend=1 00:11:54.272 --rc geninfo_all_blocks=1 00:11:54.272 --rc geninfo_unexecuted_blocks=1 00:11:54.272 00:11:54.272 ' 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:54.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.272 --rc genhtml_branch_coverage=1 00:11:54.272 --rc genhtml_function_coverage=1 00:11:54.272 --rc genhtml_legend=1 00:11:54.272 --rc geninfo_all_blocks=1 00:11:54.272 --rc geninfo_unexecuted_blocks=1 00:11:54.272 00:11:54.272 ' 00:11:54.272 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:54.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.272 --rc genhtml_branch_coverage=1 00:11:54.272 --rc genhtml_function_coverage=1 00:11:54.272 --rc genhtml_legend=1 00:11:54.272 --rc geninfo_all_blocks=1 00:11:54.272 --rc geninfo_unexecuted_blocks=1 00:11:54.272 00:11:54.272 ' 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.532 11:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.532 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.533 11:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:54.533 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:01.106 11:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.106 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:01.107 11:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:01.107 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:01.107 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.107 11:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:01.107 Found net devices under 0000:86:00.0: cvl_0_0 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.107 
11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:01.107 Found net devices under 0000:86:00.1: cvl_0_1 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.107 11:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:01.107 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:01.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:12:01.107 00:12:01.107 --- 10.0.0.2 ping statistics --- 00:12:01.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.107 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:01.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:12:01.107 00:12:01.107 --- 10.0.0.1 ping statistics --- 00:12:01.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.107 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2190259 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
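The `nvmf_tcp_init` sequence just traced can be condensed into a standalone sketch: the target-side port (cvl_0_0) is moved into its own network namespace, each side gets a /24 address, TCP port 4420 is opened with a tagged iptables rule, and reachability is ping-checked both ways. The `run()`/`DRY_RUN` wrapper is not part of SPDK; it is added here so the commands (which otherwise need root and the real NICs) can be previewed harmlessly:

```shell
# Condensed replay of nvmf_tcp_init as seen in the trace above. With
# DRY_RUN=1 (the default here) each command is only echoed; set DRY_RUN=0
# on a host with the interfaces present to actually apply it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                     # target port into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
run ping -c 1 10.0.0.2                                  # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1              # target -> initiator
```

Putting the target port in a namespace is what lets a single host play both NVMe-oF roles over real hardware; it is also why the target app is launched through `ip netns exec cvl_0_0_ns_spdk` a few lines later.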
waitforlisten 2190259 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.107 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2190259 ']' 00:12:01.108 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.108 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.108 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.108 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.108 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:01.108 [2024-11-19 11:23:14.154293] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:12:01.108 [2024-11-19 11:23:14.154336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.108 [2024-11-19 11:23:14.235931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.108 [2024-11-19 11:23:14.279334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.108 [2024-11-19 11:23:14.279369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:01.108 [2024-11-19 11:23:14.279376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.108 [2024-11-19 11:23:14.279382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.108 [2024-11-19 11:23:14.279388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.108 [2024-11-19 11:23:14.281033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.108 [2024-11-19 11:23:14.281052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.108 [2024-11-19 11:23:14.281139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.108 [2024-11-19 11:23:14.281140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.367 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.367 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:01.367 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:01.367 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.367 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:01.367 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.367 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:01.367 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:01.367 11:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:01.367 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:01.367 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:01.625 "nvmf_tgt_1" 00:12:01.625 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:01.625 "nvmf_tgt_2" 00:12:01.625 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:01.625 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:01.884 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:01.884 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:01.884 true 00:12:01.884 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:01.884 true 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.143 rmmod nvme_tcp 00:12:02.143 rmmod nvme_fabrics 00:12:02.143 rmmod nvme_keyring 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2190259 ']' 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2190259 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2190259 ']' 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2190259 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
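The pass/fail logic of `multitarget.sh` visible in the trace is a count check: `nvmf_get_targets | jq length` must go 1 → 3 → 1 across the create/delete calls (the default `nvmf_tgt` target always exists). A runnable sketch of that flow, where `rpc()` is a hypothetical stub standing in for `multitarget_rpc.py` so no live `nvmf_tgt` is needed:

```shell
# Models the target-count invariant checked by multitarget.sh. The rpc()
# function here is a local stub, NOT the real RPC client; real runs issue
# nvmf_create_target / nvmf_delete_target / nvmf_get_targets over spdk.sock.
targets=("nvmf_tgt")                       # default target always present

rpc() {
    case $1 in
        create) targets+=("$2") ;;
        delete) mapfile -t targets < <(printf '%s\n' "${targets[@]}" | grep -vx "$2") ;;
        count)  echo "${#targets[@]}" ;;
    esac
}

[ "$(rpc count)" -eq 1 ] || { echo "unexpected target count"; exit 1; }
rpc create nvmf_tgt_1; rpc create nvmf_tgt_2
[ "$(rpc count)" -eq 3 ] || { echo "unexpected target count"; exit 1; }
rpc delete nvmf_tgt_1; rpc delete nvmf_tgt_2
[ "$(rpc count)" -eq 1 ] || { echo "unexpected target count"; exit 1; }
echo "multitarget count checks passed"
```

The `'[' 1 '!=' 1 ']'` and `'[' 3 '!=' 3 ']'` lines in the trace are exactly these comparisons evaluating to "no mismatch".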
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190259 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190259' 00:12:02.143 killing process with pid 2190259 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2190259 00:12:02.143 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2190259 00:12:02.403 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:02.403 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:02.403 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:02.403 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:02.403 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:02.403 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:02.403 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:02.403 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.403 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:02.403 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
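The `iptr` cleanup invoked during `nvmftestfini` works by round-tripping the ruleset: every rule the harness inserted carries an `-m comment` tag beginning `SPDK_NVMF:`, so `iptables-save | grep -v SPDK_NVMF | iptables-restore` drops exactly those rules in one shot. The filtering half can be demonstrated on a canned dump without root (the sample rules below are illustrative; the `iptables-restore` step is elided):

```shell
# The SPDK_NVMF comment tag makes harness-added rules greppable, so
# teardown never has to remember individual rule arguments. Sample
# iptables-save output, with one tagged rule sandwiched between two
# unrelated ones:
dump='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT" -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT'

kept=$(printf '%s\n' "$dump" | grep -v SPDK_NVMF)   # what iptables-restore would reload
printf '%s\n' "$kept"
```

Only the untagged rules survive the filter, which is why the port-4420 ACCEPT rule added during init disappears at teardown while the host's own rules are left alone.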
xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.403 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.403 11:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:04.941 00:12:04.941 real 0m10.239s 00:12:04.941 user 0m9.669s 00:12:04.941 sys 0m4.995s 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:04.941 ************************************ 00:12:04.941 END TEST nvmf_multitarget 00:12:04.941 ************************************ 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.941 ************************************ 00:12:04.941 START TEST nvmf_rpc 00:12:04.941 ************************************ 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:04.941 * Looking for test storage... 
00:12:04.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.941 11:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.941 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:04.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.942 --rc genhtml_branch_coverage=1 00:12:04.942 --rc genhtml_function_coverage=1 00:12:04.942 --rc genhtml_legend=1 00:12:04.942 --rc geninfo_all_blocks=1 00:12:04.942 --rc geninfo_unexecuted_blocks=1 
00:12:04.942 00:12:04.942 ' 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:04.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.942 --rc genhtml_branch_coverage=1 00:12:04.942 --rc genhtml_function_coverage=1 00:12:04.942 --rc genhtml_legend=1 00:12:04.942 --rc geninfo_all_blocks=1 00:12:04.942 --rc geninfo_unexecuted_blocks=1 00:12:04.942 00:12:04.942 ' 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:04.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.942 --rc genhtml_branch_coverage=1 00:12:04.942 --rc genhtml_function_coverage=1 00:12:04.942 --rc genhtml_legend=1 00:12:04.942 --rc geninfo_all_blocks=1 00:12:04.942 --rc geninfo_unexecuted_blocks=1 00:12:04.942 00:12:04.942 ' 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:04.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.942 --rc genhtml_branch_coverage=1 00:12:04.942 --rc genhtml_function_coverage=1 00:12:04.942 --rc genhtml_legend=1 00:12:04.942 --rc geninfo_all_blocks=1 00:12:04.942 --rc geninfo_unexecuted_blocks=1 00:12:04.942 00:12:04.942 ' 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.942 11:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:04.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable
00:12:04.942 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=()
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=()
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=()
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=()
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=()
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:12:11.515 Found 0000:86:00.0 (0x8086 - 0x159b)
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:12:11.515 Found 0000:86:00.1 (0x8086 - 0x159b)
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:11.515 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:12:11.516 Found net devices under 0000:86:00.0: cvl_0_0
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:12:11.516 Found net devices under 0000:86:00.1: cvl_0_1
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:11.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:11.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms
00:12:11.516 
00:12:11.516 --- 10.0.0.2 ping statistics ---
00:12:11.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:11.516 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:11.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:11.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms
00:12:11.516 
00:12:11.516 --- 10.0.0.1 ping statistics ---
00:12:11.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:11.516 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2194049
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2194049
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2194049 ']'
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:11.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.516 [2024-11-19 11:23:24.438790] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:12:11.516 [2024-11-19 11:23:24.438841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:11.516 [2024-11-19 11:23:24.517904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:11.516 [2024-11-19 11:23:24.561285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:11.516 [2024-11-19 11:23:24.561321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:11.516 [2024-11-19 11:23:24.561328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:11.516 [2024-11-19 11:23:24.561334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:11.516 [2024-11-19 11:23:24.561339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:11.516 [2024-11-19 11:23:24.562845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:11.516 [2024-11-19 11:23:24.562970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:11.516 [2024-11-19 11:23:24.563034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:11.516 [2024-11-19 11:23:24.563035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.516 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:12:11.516 "tick_rate": 2300000000,
00:12:11.516 "poll_groups": [
00:12:11.516 {
00:12:11.516 "name": "nvmf_tgt_poll_group_000",
00:12:11.516 "admin_qpairs": 0,
00:12:11.516 "io_qpairs": 0,
00:12:11.516 "current_admin_qpairs": 0,
00:12:11.516 "current_io_qpairs": 0,
00:12:11.516 "pending_bdev_io": 0,
00:12:11.516 "completed_nvme_io": 0,
00:12:11.516 "transports": []
00:12:11.516 },
00:12:11.516 {
00:12:11.516 "name": "nvmf_tgt_poll_group_001",
00:12:11.516 "admin_qpairs": 0,
00:12:11.516 "io_qpairs": 0,
00:12:11.516 "current_admin_qpairs": 0,
00:12:11.516 "current_io_qpairs": 0,
00:12:11.516 "pending_bdev_io": 0,
00:12:11.516 "completed_nvme_io": 0,
00:12:11.516 "transports": []
00:12:11.517 },
00:12:11.517 {
00:12:11.517 "name": "nvmf_tgt_poll_group_002",
00:12:11.517 "admin_qpairs": 0,
00:12:11.517 "io_qpairs": 0,
00:12:11.517 "current_admin_qpairs": 0,
00:12:11.517 "current_io_qpairs": 0,
00:12:11.517 "pending_bdev_io": 0,
00:12:11.517 "completed_nvme_io": 0,
00:12:11.517 "transports": []
00:12:11.517 },
00:12:11.517 {
00:12:11.517 "name": "nvmf_tgt_poll_group_003",
00:12:11.517 "admin_qpairs": 0,
00:12:11.517 "io_qpairs": 0,
00:12:11.517 "current_admin_qpairs": 0,
00:12:11.517 "current_io_qpairs": 0,
00:12:11.517 "pending_bdev_io": 0,
00:12:11.517 "completed_nvme_io": 0,
00:12:11.517 "transports": []
00:12:11.517 }
00:12:11.517 ]
00:12:11.517 }'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.517 [2024-11-19 11:23:24.813228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:12:11.517 "tick_rate": 2300000000,
00:12:11.517 "poll_groups": [
00:12:11.517 {
00:12:11.517 "name": "nvmf_tgt_poll_group_000",
00:12:11.517 "admin_qpairs": 0,
00:12:11.517 "io_qpairs": 0,
00:12:11.517 "current_admin_qpairs": 0,
00:12:11.517 "current_io_qpairs": 0,
00:12:11.517 "pending_bdev_io": 0,
00:12:11.517 "completed_nvme_io": 0,
00:12:11.517 "transports": [
00:12:11.517 {
00:12:11.517 "trtype": "TCP"
00:12:11.517 }
00:12:11.517 ]
00:12:11.517 },
00:12:11.517 {
00:12:11.517 "name": "nvmf_tgt_poll_group_001",
00:12:11.517 "admin_qpairs": 0,
00:12:11.517 "io_qpairs": 0,
00:12:11.517 "current_admin_qpairs": 0,
00:12:11.517 "current_io_qpairs": 0,
00:12:11.517 "pending_bdev_io": 0,
00:12:11.517 "completed_nvme_io": 0,
00:12:11.517 "transports": [
00:12:11.517 {
00:12:11.517 "trtype": "TCP"
00:12:11.517 }
00:12:11.517 ]
00:12:11.517 },
00:12:11.517 {
00:12:11.517 "name": "nvmf_tgt_poll_group_002",
00:12:11.517 "admin_qpairs": 0,
00:12:11.517 "io_qpairs": 0,
00:12:11.517 "current_admin_qpairs": 0,
00:12:11.517 "current_io_qpairs": 0,
00:12:11.517 "pending_bdev_io": 0,
00:12:11.517 "completed_nvme_io": 0,
00:12:11.517 "transports": [
00:12:11.517 {
00:12:11.517 "trtype": "TCP"
00:12:11.517 }
00:12:11.517 ]
00:12:11.517 },
00:12:11.517 {
00:12:11.517 "name": "nvmf_tgt_poll_group_003",
00:12:11.517 "admin_qpairs": 0,
00:12:11.517 "io_qpairs": 0,
00:12:11.517 "current_admin_qpairs": 0,
00:12:11.517 "current_io_qpairs": 0,
00:12:11.517 "pending_bdev_io": 0,
00:12:11.517 "completed_nvme_io": 0,
00:12:11.517 "transports": [
00:12:11.517 {
00:12:11.517 "trtype": "TCP"
00:12:11.517 }
00:12:11.517 ]
00:12:11.517 }
00:12:11.517 ]
00:12:11.517 }'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.517 Malloc1
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.517 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.517 [2024-11-19 11:23:24.998799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:12:11.517 [2024-11-19 11:23:25.027290] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562'
00:12:11.517 Failed to write to /dev/nvme-fabrics: Input/output error
00:12:11.517 could not add new controller: failed to write to nvme-fabrics device
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.517 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.518 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:12.452 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:12:12.452 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:12.452 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:12.452 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:12.452 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:14.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:14.981 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:14.982 [2024-11-19 11:23:28.349415] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562'
00:12:14.982 Failed to write to /dev/nvme-fabrics: Input/output error
00:12:14.982 could not add new controller: failed to write to nvme-fabrics device
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.982 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:15.914 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:12:15.914 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:15.914 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:15.914 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:15.914 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:17.815 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:17.815 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:17.815 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c
SPDKISFASTANDAWESOME 00:12:17.815 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:17.815 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.815 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:17.815 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:18.073 11:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.073 [2024-11-19 11:23:31.667698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.073 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.074 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.446 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.446 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:19.446 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.446 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:19.446 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.345 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.345 
11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.345 [2024-11-19 11:23:35.021075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.345 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.718 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.718 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:22.718 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.718 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:22.719 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:24.616 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:24.616 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:24.616 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.616 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:24.616 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.616 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:24.616 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.617 11:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.617 [2024-11-19 11:23:38.362686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.617 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.991 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.991 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:25.991 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.991 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:25.991 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:27.892 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:27.892 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:27.892 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.892 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:27.892 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.892 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:27.892 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:28.150 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.151 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.151 [2024-11-19 11:23:41.800712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.151 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.151 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:28.151 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.151 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.151 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.151 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:28.151 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.151 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.151 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.151 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.526 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.526 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:29.526 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:29.526 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:29.526 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:31.425 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:31.425 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:31.425 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.425 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:31.425 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.425 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:31.425 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.425 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.425 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:31.425 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:31.425 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.425 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:31.425 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.425 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.426 [2024-11-19 11:23:45.108206] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.426 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.800 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.800 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:32.800 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.800 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:32.800 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.701 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.702 [2024-11-19 11:23:48.435995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.702 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.961 [2024-11-19 11:23:48.484142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.961 
11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.961 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:34.962 [2024-11-19 11:23:48.532269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 [2024-11-19 11:23:48.580442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 [2024-11-19 11:23:48.628608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:34.962 "tick_rate": 2300000000, 00:12:34.962 "poll_groups": [ 00:12:34.962 { 00:12:34.962 "name": "nvmf_tgt_poll_group_000", 00:12:34.962 "admin_qpairs": 2, 00:12:34.962 "io_qpairs": 168, 00:12:34.962 "current_admin_qpairs": 0, 00:12:34.962 "current_io_qpairs": 0, 00:12:34.962 "pending_bdev_io": 0, 00:12:34.962 "completed_nvme_io": 267, 00:12:34.962 "transports": [ 00:12:34.962 { 00:12:34.962 "trtype": "TCP" 00:12:34.962 } 00:12:34.962 ] 00:12:34.962 }, 00:12:34.962 { 00:12:34.962 "name": "nvmf_tgt_poll_group_001", 00:12:34.962 "admin_qpairs": 2, 00:12:34.962 "io_qpairs": 168, 00:12:34.962 "current_admin_qpairs": 0, 00:12:34.962 "current_io_qpairs": 0, 00:12:34.962 "pending_bdev_io": 0, 00:12:34.962 "completed_nvme_io": 219, 00:12:34.962 "transports": [ 00:12:34.962 { 00:12:34.962 "trtype": "TCP" 00:12:34.962 } 00:12:34.962 ] 00:12:34.962 }, 00:12:34.962 { 00:12:34.962 "name": "nvmf_tgt_poll_group_002", 00:12:34.962 "admin_qpairs": 1, 00:12:34.962 "io_qpairs": 168, 00:12:34.962 "current_admin_qpairs": 0, 00:12:34.962 "current_io_qpairs": 0, 00:12:34.962 "pending_bdev_io": 0, 
00:12:34.962 "completed_nvme_io": 269, 00:12:34.962 "transports": [ 00:12:34.962 { 00:12:34.962 "trtype": "TCP" 00:12:34.962 } 00:12:34.962 ] 00:12:34.962 }, 00:12:34.962 { 00:12:34.962 "name": "nvmf_tgt_poll_group_003", 00:12:34.962 "admin_qpairs": 2, 00:12:34.962 "io_qpairs": 168, 00:12:34.962 "current_admin_qpairs": 0, 00:12:34.962 "current_io_qpairs": 0, 00:12:34.962 "pending_bdev_io": 0, 00:12:34.962 "completed_nvme_io": 267, 00:12:34.962 "transports": [ 00:12:34.962 { 00:12:34.962 "trtype": "TCP" 00:12:34.962 } 00:12:34.962 ] 00:12:34.962 } 00:12:34.962 ] 00:12:34.962 }' 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.962 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:34.963 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:34.963 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:34.963 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:34.963 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.222 rmmod nvme_tcp 00:12:35.222 rmmod nvme_fabrics 00:12:35.222 rmmod nvme_keyring 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2194049 ']' 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2194049 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2194049 ']' 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2194049 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2194049 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2194049' 00:12:35.222 killing process with pid 2194049 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2194049 00:12:35.222 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2194049 00:12:35.481 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:35.481 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.481 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.481 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:35.481 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:35.481 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.481 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.481 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.481 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.482 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.482 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.482 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.387 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.387 00:12:37.387 real 0m32.956s 00:12:37.387 user 1m39.414s 00:12:37.387 sys 0m6.480s 00:12:37.387 11:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.387 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.387 ************************************ 00:12:37.387 END TEST nvmf_rpc 00:12:37.387 ************************************ 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.647 ************************************ 00:12:37.647 START TEST nvmf_invalid 00:12:37.647 ************************************ 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:37.647 * Looking for test storage... 
00:12:37.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.647 --rc genhtml_branch_coverage=1 00:12:37.647 --rc 
genhtml_function_coverage=1 00:12:37.647 --rc genhtml_legend=1 00:12:37.647 --rc geninfo_all_blocks=1 00:12:37.647 --rc geninfo_unexecuted_blocks=1 00:12:37.647 00:12:37.647 ' 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.647 --rc genhtml_branch_coverage=1 00:12:37.647 --rc genhtml_function_coverage=1 00:12:37.647 --rc genhtml_legend=1 00:12:37.647 --rc geninfo_all_blocks=1 00:12:37.647 --rc geninfo_unexecuted_blocks=1 00:12:37.647 00:12:37.647 ' 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.647 --rc genhtml_branch_coverage=1 00:12:37.647 --rc genhtml_function_coverage=1 00:12:37.647 --rc genhtml_legend=1 00:12:37.647 --rc geninfo_all_blocks=1 00:12:37.647 --rc geninfo_unexecuted_blocks=1 00:12:37.647 00:12:37.647 ' 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.647 --rc genhtml_branch_coverage=1 00:12:37.647 --rc genhtml_function_coverage=1 00:12:37.647 --rc genhtml_legend=1 00:12:37.647 --rc geninfo_all_blocks=1 00:12:37.647 --rc geninfo_unexecuted_blocks=1 00:12:37.647 00:12:37.647 ' 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.647 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.648 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.648 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.648 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.908 11:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.908 11:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.908 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.477 11:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.477 11:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:44.477 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:44.477 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:44.477 Found net devices under 0000:86:00.0: cvl_0_0 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.477 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:44.478 Found net devices under 0000:86:00.1: cvl_0_1 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.478 11:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.478 11:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:12:44.478 00:12:44.478 --- 10.0.0.2 ping statistics --- 00:12:44.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.478 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:44.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:12:44.478 00:12:44.478 --- 10.0.0.1 ping statistics --- 00:12:44.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.478 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:44.478 11:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2201773 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2201773 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2201773 ']' 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.478 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:44.478 [2024-11-19 11:23:57.446419] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:12:44.478 [2024-11-19 11:23:57.446467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.478 [2024-11-19 11:23:57.525808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.478 [2024-11-19 11:23:57.566795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.478 [2024-11-19 11:23:57.566833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.478 [2024-11-19 11:23:57.566840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.478 [2024-11-19 11:23:57.566846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.478 [2024-11-19 11:23:57.566851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:44.478 [2024-11-19 11:23:57.568472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.478 [2024-11-19 11:23:57.568579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.478 [2024-11-19 11:23:57.568688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.478 [2024-11-19 11:23:57.568689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.737 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.737 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:44.737 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.737 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.737 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:44.737 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.737 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:44.737 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20835 00:12:44.737 [2024-11-19 11:23:58.488285] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:44.996 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:44.996 { 00:12:44.996 "nqn": "nqn.2016-06.io.spdk:cnode20835", 00:12:44.996 "tgt_name": "foobar", 00:12:44.996 "method": "nvmf_create_subsystem", 00:12:44.996 "req_id": 1 00:12:44.996 } 00:12:44.996 Got JSON-RPC error 
response 00:12:44.996 response: 00:12:44.996 { 00:12:44.996 "code": -32603, 00:12:44.996 "message": "Unable to find target foobar" 00:12:44.996 }' 00:12:44.996 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:44.996 { 00:12:44.996 "nqn": "nqn.2016-06.io.spdk:cnode20835", 00:12:44.996 "tgt_name": "foobar", 00:12:44.996 "method": "nvmf_create_subsystem", 00:12:44.996 "req_id": 1 00:12:44.996 } 00:12:44.996 Got JSON-RPC error response 00:12:44.996 response: 00:12:44.996 { 00:12:44.996 "code": -32603, 00:12:44.996 "message": "Unable to find target foobar" 00:12:44.996 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:44.996 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:44.996 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19208 00:12:44.996 [2024-11-19 11:23:58.697052] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19208: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:44.996 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:44.996 { 00:12:44.996 "nqn": "nqn.2016-06.io.spdk:cnode19208", 00:12:44.996 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:44.996 "method": "nvmf_create_subsystem", 00:12:44.996 "req_id": 1 00:12:44.996 } 00:12:44.996 Got JSON-RPC error response 00:12:44.996 response: 00:12:44.996 { 00:12:44.996 "code": -32602, 00:12:44.996 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:44.996 }' 00:12:44.996 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:44.996 { 00:12:44.996 "nqn": "nqn.2016-06.io.spdk:cnode19208", 00:12:44.996 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:44.996 "method": "nvmf_create_subsystem", 
00:12:44.996 "req_id": 1 00:12:44.996 } 00:12:44.996 Got JSON-RPC error response 00:12:44.996 response: 00:12:44.996 { 00:12:44.996 "code": -32602, 00:12:44.996 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:44.996 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:44.996 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:44.996 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11447 00:12:45.255 [2024-11-19 11:23:58.913754] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11447: invalid model number 'SPDK_Controller' 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:45.255 { 00:12:45.255 "nqn": "nqn.2016-06.io.spdk:cnode11447", 00:12:45.255 "model_number": "SPDK_Controller\u001f", 00:12:45.255 "method": "nvmf_create_subsystem", 00:12:45.255 "req_id": 1 00:12:45.255 } 00:12:45.255 Got JSON-RPC error response 00:12:45.255 response: 00:12:45.255 { 00:12:45.255 "code": -32602, 00:12:45.255 "message": "Invalid MN SPDK_Controller\u001f" 00:12:45.255 }' 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:45.255 { 00:12:45.255 "nqn": "nqn.2016-06.io.spdk:cnode11447", 00:12:45.255 "model_number": "SPDK_Controller\u001f", 00:12:45.255 "method": "nvmf_create_subsystem", 00:12:45.255 "req_id": 1 00:12:45.255 } 00:12:45.255 Got JSON-RPC error response 00:12:45.255 response: 00:12:45.255 { 00:12:45.255 "code": -32602, 00:12:45.255 "message": "Invalid MN SPDK_Controller\u001f" 00:12:45.255 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.255 11:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:45.255 11:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:45.255 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:45.255 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.255 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:45.514 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.514 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.514 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 0 == \- ]] 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '0I~Y][pI0?&%n]/tkviC' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '0I~Y][pI0?&%n]/tkviC' nqn.2016-06.io.spdk:cnode5685 00:12:45.514 [2024-11-19 11:23:59.258893] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5685: invalid serial number '0I~Y][pI0?&%n]/tkviC' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:45.514 { 00:12:45.514 "nqn": "nqn.2016-06.io.spdk:cnode5685", 00:12:45.514 "serial_number": "0I~Y][pI0?&%n]/t\u007fkviC", 00:12:45.514 "method": "nvmf_create_subsystem", 00:12:45.514 "req_id": 1 00:12:45.514 } 00:12:45.514 Got JSON-RPC error response 00:12:45.514 response: 00:12:45.514 { 00:12:45.514 "code": -32602, 00:12:45.514 "message": "Invalid SN 0I~Y][pI0?&%n]/t\u007fkviC" 00:12:45.514 }' 00:12:45.514 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:45.514 { 00:12:45.514 "nqn": "nqn.2016-06.io.spdk:cnode5685", 00:12:45.514 "serial_number": "0I~Y][pI0?&%n]/t\u007fkviC", 00:12:45.514 "method": "nvmf_create_subsystem", 00:12:45.514 "req_id": 1 00:12:45.515 } 00:12:45.515 Got JSON-RPC error response 00:12:45.515 response: 00:12:45.515 { 00:12:45.515 "code": -32602, 00:12:45.515 "message": "Invalid SN 0I~Y][pI0?&%n]/t\u007fkviC" 00:12:45.515 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:45.774 
11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.774 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:45.774 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:45.774 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:45.774 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:45.775 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:45.775 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:45.775 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:45.775 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:45.776 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.776 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:45.776 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.776 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.034 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:46.034 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:46.034 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:46.034 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.035 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.035 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ V == \- ]] 00:12:46.035 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'V4BHE2+r5ApZ V/4*P(>>7QGTi&OKgnp&t9 {lw9' 00:12:46.035 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'V4BHE2+r5ApZ V/4*P(>>7QGTi&OKgnp&t9 {lw9' nqn.2016-06.io.spdk:cnode6107 00:12:46.035 [2024-11-19 11:23:59.728477] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6107: invalid model number 'V4BHE2+r5ApZ V/4*P(>>7QGTi&OKgnp&t9 {lw9' 00:12:46.035 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:46.035 { 00:12:46.035 "nqn": "nqn.2016-06.io.spdk:cnode6107", 00:12:46.035 "model_number": "V4BHE2+\u007fr5ApZ V/4*P(>>7QGTi&OKgnp&t9 {lw9", 00:12:46.035 "method": "nvmf_create_subsystem", 00:12:46.035 "req_id": 1 00:12:46.035 } 00:12:46.035 Got JSON-RPC error response 00:12:46.035 response: 00:12:46.035 { 00:12:46.035 "code": -32602, 00:12:46.035 "message": "Invalid MN V4BHE2+\u007fr5ApZ V/4*P(>>7QGTi&OKgnp&t9 {lw9" 00:12:46.035 }' 00:12:46.035 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:46.035 { 00:12:46.035 "nqn": 
"nqn.2016-06.io.spdk:cnode6107", 00:12:46.035 "model_number": "V4BHE2+\u007fr5ApZ V/4*P(>>7QGTi&OKgnp&t9 {lw9", 00:12:46.035 "method": "nvmf_create_subsystem", 00:12:46.035 "req_id": 1 00:12:46.035 } 00:12:46.035 Got JSON-RPC error response 00:12:46.035 response: 00:12:46.035 { 00:12:46.035 "code": -32602, 00:12:46.035 "message": "Invalid MN V4BHE2+\u007fr5ApZ V/4*P(>>7QGTi&OKgnp&t9 {lw9" 00:12:46.035 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:46.035 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:46.292 [2024-11-19 11:23:59.929233] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.292 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:46.550 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:46.550 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:46.550 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:46.550 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:46.550 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:46.809 [2024-11-19 11:24:00.362657] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:46.809 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:46.809 { 00:12:46.809 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:46.809 "listen_address": { 00:12:46.809 "trtype": "tcp", 00:12:46.809 "traddr": "", 00:12:46.809 "trsvcid": 
"4421" 00:12:46.809 }, 00:12:46.809 "method": "nvmf_subsystem_remove_listener", 00:12:46.809 "req_id": 1 00:12:46.809 } 00:12:46.809 Got JSON-RPC error response 00:12:46.809 response: 00:12:46.809 { 00:12:46.809 "code": -32602, 00:12:46.809 "message": "Invalid parameters" 00:12:46.809 }' 00:12:46.809 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:46.809 { 00:12:46.809 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:46.809 "listen_address": { 00:12:46.809 "trtype": "tcp", 00:12:46.809 "traddr": "", 00:12:46.809 "trsvcid": "4421" 00:12:46.809 }, 00:12:46.809 "method": "nvmf_subsystem_remove_listener", 00:12:46.809 "req_id": 1 00:12:46.809 } 00:12:46.809 Got JSON-RPC error response 00:12:46.809 response: 00:12:46.809 { 00:12:46.809 "code": -32602, 00:12:46.809 "message": "Invalid parameters" 00:12:46.809 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:46.809 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25940 -i 0 00:12:46.809 [2024-11-19 11:24:00.575352] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25940: invalid cntlid range [0-65519] 00:12:47.067 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:47.067 { 00:12:47.067 "nqn": "nqn.2016-06.io.spdk:cnode25940", 00:12:47.067 "min_cntlid": 0, 00:12:47.067 "method": "nvmf_create_subsystem", 00:12:47.067 "req_id": 1 00:12:47.067 } 00:12:47.067 Got JSON-RPC error response 00:12:47.067 response: 00:12:47.067 { 00:12:47.067 "code": -32602, 00:12:47.067 "message": "Invalid cntlid range [0-65519]" 00:12:47.067 }' 00:12:47.067 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:47.067 { 00:12:47.067 "nqn": "nqn.2016-06.io.spdk:cnode25940", 00:12:47.067 "min_cntlid": 0, 00:12:47.067 "method": 
"nvmf_create_subsystem", 00:12:47.067 "req_id": 1 00:12:47.067 } 00:12:47.067 Got JSON-RPC error response 00:12:47.067 response: 00:12:47.067 { 00:12:47.067 "code": -32602, 00:12:47.067 "message": "Invalid cntlid range [0-65519]" 00:12:47.067 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:47.067 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24917 -i 65520 00:12:47.067 [2024-11-19 11:24:00.788085] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24917: invalid cntlid range [65520-65519] 00:12:47.067 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:47.067 { 00:12:47.067 "nqn": "nqn.2016-06.io.spdk:cnode24917", 00:12:47.067 "min_cntlid": 65520, 00:12:47.067 "method": "nvmf_create_subsystem", 00:12:47.067 "req_id": 1 00:12:47.067 } 00:12:47.067 Got JSON-RPC error response 00:12:47.068 response: 00:12:47.068 { 00:12:47.068 "code": -32602, 00:12:47.068 "message": "Invalid cntlid range [65520-65519]" 00:12:47.068 }' 00:12:47.068 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:47.068 { 00:12:47.068 "nqn": "nqn.2016-06.io.spdk:cnode24917", 00:12:47.068 "min_cntlid": 65520, 00:12:47.068 "method": "nvmf_create_subsystem", 00:12:47.068 "req_id": 1 00:12:47.068 } 00:12:47.068 Got JSON-RPC error response 00:12:47.068 response: 00:12:47.068 { 00:12:47.068 "code": -32602, 00:12:47.068 "message": "Invalid cntlid range [65520-65519]" 00:12:47.068 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:47.068 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10566 -I 0 00:12:47.326 [2024-11-19 11:24:00.996815] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode10566: invalid cntlid range [1-0] 00:12:47.326 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:47.326 { 00:12:47.326 "nqn": "nqn.2016-06.io.spdk:cnode10566", 00:12:47.326 "max_cntlid": 0, 00:12:47.326 "method": "nvmf_create_subsystem", 00:12:47.326 "req_id": 1 00:12:47.326 } 00:12:47.326 Got JSON-RPC error response 00:12:47.326 response: 00:12:47.326 { 00:12:47.326 "code": -32602, 00:12:47.326 "message": "Invalid cntlid range [1-0]" 00:12:47.326 }' 00:12:47.326 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:47.326 { 00:12:47.326 "nqn": "nqn.2016-06.io.spdk:cnode10566", 00:12:47.326 "max_cntlid": 0, 00:12:47.326 "method": "nvmf_create_subsystem", 00:12:47.326 "req_id": 1 00:12:47.326 } 00:12:47.326 Got JSON-RPC error response 00:12:47.326 response: 00:12:47.326 { 00:12:47.326 "code": -32602, 00:12:47.326 "message": "Invalid cntlid range [1-0]" 00:12:47.326 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:47.326 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25834 -I 65520 00:12:47.585 [2024-11-19 11:24:01.209533] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25834: invalid cntlid range [1-65520] 00:12:47.585 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:47.585 { 00:12:47.585 "nqn": "nqn.2016-06.io.spdk:cnode25834", 00:12:47.585 "max_cntlid": 65520, 00:12:47.585 "method": "nvmf_create_subsystem", 00:12:47.585 "req_id": 1 00:12:47.585 } 00:12:47.585 Got JSON-RPC error response 00:12:47.585 response: 00:12:47.585 { 00:12:47.585 "code": -32602, 00:12:47.585 "message": "Invalid cntlid range [1-65520]" 00:12:47.585 }' 00:12:47.585 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:12:47.585 { 00:12:47.585 "nqn": "nqn.2016-06.io.spdk:cnode25834", 00:12:47.585 "max_cntlid": 65520, 00:12:47.585 "method": "nvmf_create_subsystem", 00:12:47.585 "req_id": 1 00:12:47.585 } 00:12:47.585 Got JSON-RPC error response 00:12:47.585 response: 00:12:47.585 { 00:12:47.585 "code": -32602, 00:12:47.585 "message": "Invalid cntlid range [1-65520]" 00:12:47.585 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:47.585 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8397 -i 6 -I 5 00:12:47.843 [2024-11-19 11:24:01.422271] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8397: invalid cntlid range [6-5] 00:12:47.843 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:47.843 { 00:12:47.843 "nqn": "nqn.2016-06.io.spdk:cnode8397", 00:12:47.843 "min_cntlid": 6, 00:12:47.843 "max_cntlid": 5, 00:12:47.843 "method": "nvmf_create_subsystem", 00:12:47.843 "req_id": 1 00:12:47.843 } 00:12:47.843 Got JSON-RPC error response 00:12:47.843 response: 00:12:47.843 { 00:12:47.843 "code": -32602, 00:12:47.843 "message": "Invalid cntlid range [6-5]" 00:12:47.843 }' 00:12:47.843 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:47.843 { 00:12:47.843 "nqn": "nqn.2016-06.io.spdk:cnode8397", 00:12:47.843 "min_cntlid": 6, 00:12:47.843 "max_cntlid": 5, 00:12:47.843 "method": "nvmf_create_subsystem", 00:12:47.843 "req_id": 1 00:12:47.843 } 00:12:47.843 Got JSON-RPC error response 00:12:47.843 response: 00:12:47.843 { 00:12:47.843 "code": -32602, 00:12:47.843 "message": "Invalid cntlid range [6-5]" 00:12:47.843 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:47.843 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:47.843 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:47.843 { 00:12:47.843 "name": "foobar", 00:12:47.843 "method": "nvmf_delete_target", 00:12:47.843 "req_id": 1 00:12:47.843 } 00:12:47.843 Got JSON-RPC error response 00:12:47.843 response: 00:12:47.843 { 00:12:47.843 "code": -32602, 00:12:47.843 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:47.843 }' 00:12:47.843 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:47.843 { 00:12:47.843 "name": "foobar", 00:12:47.843 "method": "nvmf_delete_target", 00:12:47.843 "req_id": 1 00:12:47.843 } 00:12:47.843 Got JSON-RPC error response 00:12:47.843 response: 00:12:47.843 { 00:12:47.843 "code": -32602, 00:12:47.843 "message": "The specified target doesn't exist, cannot delete it." 00:12:47.843 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:47.843 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:47.843 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:47.843 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:47.843 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:47.843 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:47.844 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:47.844 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:47.844 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:47.844 rmmod nvme_tcp 00:12:47.844 
rmmod nvme_fabrics 00:12:47.844 rmmod nvme_keyring 00:12:48.102 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.102 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:48.102 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:48.102 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2201773 ']' 00:12:48.102 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2201773 00:12:48.102 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2201773 ']' 00:12:48.102 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2201773 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2201773 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2201773' 00:12:48.103 killing process with pid 2201773 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2201773 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2201773 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.103 11:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.103 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.640 11:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:50.640 00:12:50.640 real 0m12.696s 00:12:50.640 user 0m21.561s 00:12:50.640 sys 0m5.432s 00:12:50.640 11:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.640 11:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:50.640 ************************************ 00:12:50.640 END TEST nvmf_invalid 00:12:50.640 ************************************ 00:12:50.640 11:24:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:12:50.640 11:24:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.640 11:24:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.640 11:24:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.640 ************************************ 00:12:50.640 START TEST nvmf_connect_stress 00:12:50.640 ************************************ 00:12:50.640 11:24:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:50.640 * Looking for test storage... 00:12:50.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:50.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.640 --rc genhtml_branch_coverage=1 00:12:50.640 --rc genhtml_function_coverage=1 00:12:50.640 --rc genhtml_legend=1 00:12:50.640 --rc 
geninfo_all_blocks=1 00:12:50.640 --rc geninfo_unexecuted_blocks=1 00:12:50.640 00:12:50.640 ' 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:50.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.640 --rc genhtml_branch_coverage=1 00:12:50.640 --rc genhtml_function_coverage=1 00:12:50.640 --rc genhtml_legend=1 00:12:50.640 --rc geninfo_all_blocks=1 00:12:50.640 --rc geninfo_unexecuted_blocks=1 00:12:50.640 00:12:50.640 ' 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:50.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.640 --rc genhtml_branch_coverage=1 00:12:50.640 --rc genhtml_function_coverage=1 00:12:50.640 --rc genhtml_legend=1 00:12:50.640 --rc geninfo_all_blocks=1 00:12:50.640 --rc geninfo_unexecuted_blocks=1 00:12:50.640 00:12:50.640 ' 00:12:50.640 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:50.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.640 --rc genhtml_branch_coverage=1 00:12:50.640 --rc genhtml_function_coverage=1 00:12:50.640 --rc genhtml_legend=1 00:12:50.640 --rc geninfo_all_blocks=1 00:12:50.640 --rc geninfo_unexecuted_blocks=1 00:12:50.640 00:12:50.640 ' 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.641 
11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.641 11:24:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:56.062 11:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:56.062 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:56.062 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:56.063 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.063 11:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:56.063 Found net devices under 0000:86:00.0: cvl_0_0 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:56.063 Found net devices under 0000:86:00.1: cvl_0_1 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.063 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.323 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.323 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.323 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:56.323 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:56.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:12:56.323 00:12:56.323 --- 10.0.0.2 ping statistics --- 00:12:56.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.323 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:56.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:12:56.323 00:12:56.323 --- 10.0.0.1 ping statistics --- 00:12:56.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.323 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.323 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.582 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2206708 00:12:56.582 11:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2206708 00:12:56.582 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:56.582 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2206708 ']' 00:12:56.582 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.582 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.582 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.582 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.582 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.582 [2024-11-19 11:24:10.156478] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:12:56.582 [2024-11-19 11:24:10.156527] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.582 [2024-11-19 11:24:10.236965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:56.582 [2024-11-19 11:24:10.279906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:56.582 [2024-11-19 11:24:10.279945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.582 [2024-11-19 11:24:10.279956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.582 [2024-11-19 11:24:10.279962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.582 [2024-11-19 11:24:10.279967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.582 [2024-11-19 11:24:10.281455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.582 [2024-11-19 11:24:10.281561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.582 [2024-11-19 11:24:10.281562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.842 [2024-11-19 11:24:10.417984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.842 [2024-11-19 11:24:10.438201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.842 NULL1 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2206814 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.842 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.102 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.102 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:12:57.102 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.102 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.102 11:24:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.668 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.668 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:12:57.668 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.668 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.668 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.925 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.925 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:12:57.925 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.925 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.925 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.183 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.183 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:12:58.183 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.183 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.183 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.441 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.441 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:12:58.441 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.441 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.441 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.006 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.006 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:12:59.006 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.006 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.006 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.264 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.264 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:12:59.264 11:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.264 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.264 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.521 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.521 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:12:59.521 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.521 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.521 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.779 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.779 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:12:59.779 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.779 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.779 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.037 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.037 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:00.037 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.037 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.037 
11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.602 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.602 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:00.602 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.602 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.602 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.860 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.860 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:00.860 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.860 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.860 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.117 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.117 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:01.117 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.117 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.117 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.375 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.375 
11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:01.375 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.375 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.375 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.941 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.941 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:01.941 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.941 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.941 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.198 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.198 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:02.198 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.198 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.198 11:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.457 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.457 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:02.457 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:02.457 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.457 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.714 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.714 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:02.714 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.714 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.714 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.972 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.972 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:02.972 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.972 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.972 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.537 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.537 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:03.537 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.537 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.537 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:13:03.794 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.794 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:03.794 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.794 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.794 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.052 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.052 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:04.052 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.052 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.052 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.310 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.310 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:04.310 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.310 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.310 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.876 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.876 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2206814 00:13:04.876 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.876 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.876 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.133 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.133 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:05.133 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.133 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.133 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.391 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.391 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:05.391 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.391 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.391 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.649 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.649 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:05.649 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.649 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:05.649 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.906 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.906 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:05.906 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.906 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.906 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.472 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.472 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:06.472 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.472 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.472 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.730 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.730 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:06.730 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.730 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.730 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.988 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2206814 00:13:06.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2206814) - No such process 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2206814 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:06.988 rmmod nvme_tcp 00:13:06.988 rmmod nvme_fabrics 00:13:06.988 rmmod nvme_keyring 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2206708 ']' 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2206708 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2206708 ']' 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2206708 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.988 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2206708 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2206708' 00:13:07.247 killing process with pid 2206708 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2206708 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2206708 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.247 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:09.784 00:13:09.784 real 0m19.014s 00:13:09.784 user 0m39.457s 00:13:09.784 sys 0m8.491s 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.784 ************************************ 00:13:09.784 END TEST nvmf_connect_stress 00:13:09.784 ************************************ 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:09.784 ************************************ 00:13:09.784 START TEST nvmf_fused_ordering 00:13:09.784 ************************************ 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:09.784 * Looking for test storage... 00:13:09.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.784 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.785 11:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:09.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.785 --rc genhtml_branch_coverage=1 00:13:09.785 --rc genhtml_function_coverage=1 00:13:09.785 --rc genhtml_legend=1 00:13:09.785 --rc geninfo_all_blocks=1 00:13:09.785 --rc geninfo_unexecuted_blocks=1 00:13:09.785 00:13:09.785 ' 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:09.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.785 --rc genhtml_branch_coverage=1 00:13:09.785 --rc genhtml_function_coverage=1 00:13:09.785 --rc genhtml_legend=1 00:13:09.785 --rc geninfo_all_blocks=1 00:13:09.785 --rc geninfo_unexecuted_blocks=1 00:13:09.785 00:13:09.785 ' 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:09.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.785 --rc genhtml_branch_coverage=1 00:13:09.785 --rc genhtml_function_coverage=1 00:13:09.785 --rc genhtml_legend=1 00:13:09.785 --rc geninfo_all_blocks=1 00:13:09.785 --rc geninfo_unexecuted_blocks=1 00:13:09.785 00:13:09.785 ' 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:09.785 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:09.785 --rc genhtml_branch_coverage=1 00:13:09.785 --rc genhtml_function_coverage=1 00:13:09.785 --rc genhtml_legend=1 00:13:09.785 --rc geninfo_all_blocks=1 00:13:09.785 --rc geninfo_unexecuted_blocks=1 00:13:09.785 00:13:09.785 ' 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.785 11:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.785 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:09.786 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:09.786 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:09.786 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.786 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.786 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.786 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:09.786 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:09.786 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:09.786 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.356 11:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:16.356 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:16.356 11:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:16.356 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.356 11:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:16.356 Found net devices under 0000:86:00.0: cvl_0_0 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:16.356 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:16.357 Found net devices under 0000:86:00.1: cvl_0_1 
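The discovery loop above resolves each supported NVMe-oF-capable PCI function to its kernel net device by globbing sysfs (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` in nvmf/common.sh). A minimal standalone sketch of that lookup follows; the PCI address `0000:86:00.0` and the resulting `cvl_0_0` name are taken from this log, while the `SYSFS` override is an addition here so the sketch can be exercised against a fake sysfs tree instead of real hardware.

```shell
# Map a PCI function to its network interface name(s) via sysfs, mirroring
# the pci_net_devs glob used by nvmf/common.sh in the trace above.
SYSFS=${SYSFS:-/sys}

pci_to_netdevs() {
    # $1 is a full PCI address, e.g. 0000:86:00.0
    dir="$SYSFS/bus/pci/devices/$1/net"
    [ -d "$dir" ] || return 1   # function is not bound to a net driver on this host
    ls "$dir"                   # one entry per interface, e.g. cvl_0_0
}

# From this log: pci_to_netdevs 0000:86:00.0 resolves to cvl_0_0
```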
00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.357 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:16.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:16.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:13:16.357 00:13:16.357 --- 10.0.0.2 ping statistics --- 00:13:16.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.357 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:13:16.357 00:13:16.357 --- 10.0.0.1 ping statistics --- 00:13:16.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.357 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:16.357 11:24:29 
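The nvmf_tcp_init sequence above builds the test topology by moving the target port (`cvl_0_0`) into its own network namespace, so target/initiator traffic crosses the physical link rather than the loopback path, then verifies both directions with ping. A condensed sketch of those commands follows; the interface names, namespace name, addresses, and port are the ones in this log, while the `DRY_RUN` wrapper is added here so the sketch can run without root or these NICs (it just prints each command).

```shell
# Namespace-based test network from nvmf_tcp_init in the trace above.
# DRY_RUN=1 (the default here) prints commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                         # target side into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root netns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0 # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator
```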
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2211970 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2211970 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2211970 ']' 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.357 [2024-11-19 11:24:29.304827] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:13:16.357 [2024-11-19 11:24:29.304881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.357 [2024-11-19 11:24:29.370508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.357 [2024-11-19 11:24:29.413140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.357 [2024-11-19 11:24:29.413175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.357 [2024-11-19 11:24:29.413183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.357 [2024-11-19 11:24:29.413190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.357 [2024-11-19 11:24:29.413195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:16.357 [2024-11-19 11:24:29.413749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.357 [2024-11-19 11:24:29.557548] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.357 [2024-11-19 11:24:29.577745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.357 NULL1 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.357 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:16.358 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.358 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.358 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.358 11:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:16.358 [2024-11-19 11:24:29.634528] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:16.358 [2024-11-19 11:24:29.634560] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2212003 ] 00:13:16.358 Attached to nqn.2016-06.io.spdk:cnode1 00:13:16.358 Namespace ID: 1 size: 1GB 00:13:16.358 fused_ordering(0) 00:13:16.358 fused_ordering(1) 00:13:16.358 fused_ordering(2) 00:13:16.358 fused_ordering(3) 00:13:16.358 fused_ordering(4) 00:13:16.358 fused_ordering(5) 00:13:16.358 fused_ordering(6) 00:13:16.358 fused_ordering(7) 00:13:16.358 fused_ordering(8) 00:13:16.358 fused_ordering(9) 00:13:16.358 fused_ordering(10) 00:13:16.358 fused_ordering(11) 00:13:16.358 fused_ordering(12) 00:13:16.358 fused_ordering(13) 00:13:16.358 fused_ordering(14) 00:13:16.358 fused_ordering(15) 00:13:16.358 fused_ordering(16) 00:13:16.358 fused_ordering(17) 00:13:16.358 fused_ordering(18) 00:13:16.358 fused_ordering(19) 00:13:16.358 fused_ordering(20) 00:13:16.358 fused_ordering(21) 00:13:16.358 fused_ordering(22) 00:13:16.358 fused_ordering(23) 00:13:16.358 fused_ordering(24) 00:13:16.358 fused_ordering(25) 00:13:16.358 fused_ordering(26) 00:13:16.358 fused_ordering(27) 00:13:16.358 
00:13:16.358 fused_ordering(28) … fused_ordering(205) 00:13:16.617 fused_ordering(206) … fused_ordering(409) 00:13:16.878 fused_ordering(410) … fused_ordering(513)
00:13:16.878 fused_ordering(514) 00:13:16.878 fused_ordering(515) 00:13:16.878 fused_ordering(516) 00:13:16.878 fused_ordering(517) 00:13:16.878 fused_ordering(518) 00:13:16.878 fused_ordering(519) 00:13:16.878 fused_ordering(520) 00:13:16.878 fused_ordering(521) 00:13:16.878 fused_ordering(522) 00:13:16.878 fused_ordering(523) 00:13:16.878 fused_ordering(524) 00:13:16.878 fused_ordering(525) 00:13:16.878 fused_ordering(526) 00:13:16.878 fused_ordering(527) 00:13:16.878 fused_ordering(528) 00:13:16.878 fused_ordering(529) 00:13:16.878 fused_ordering(530) 00:13:16.878 fused_ordering(531) 00:13:16.878 fused_ordering(532) 00:13:16.878 fused_ordering(533) 00:13:16.878 fused_ordering(534) 00:13:16.878 fused_ordering(535) 00:13:16.878 fused_ordering(536) 00:13:16.878 fused_ordering(537) 00:13:16.878 fused_ordering(538) 00:13:16.878 fused_ordering(539) 00:13:16.878 fused_ordering(540) 00:13:16.878 fused_ordering(541) 00:13:16.878 fused_ordering(542) 00:13:16.878 fused_ordering(543) 00:13:16.878 fused_ordering(544) 00:13:16.878 fused_ordering(545) 00:13:16.878 fused_ordering(546) 00:13:16.878 fused_ordering(547) 00:13:16.878 fused_ordering(548) 00:13:16.878 fused_ordering(549) 00:13:16.878 fused_ordering(550) 00:13:16.878 fused_ordering(551) 00:13:16.878 fused_ordering(552) 00:13:16.878 fused_ordering(553) 00:13:16.878 fused_ordering(554) 00:13:16.878 fused_ordering(555) 00:13:16.878 fused_ordering(556) 00:13:16.878 fused_ordering(557) 00:13:16.878 fused_ordering(558) 00:13:16.879 fused_ordering(559) 00:13:16.879 fused_ordering(560) 00:13:16.879 fused_ordering(561) 00:13:16.879 fused_ordering(562) 00:13:16.879 fused_ordering(563) 00:13:16.879 fused_ordering(564) 00:13:16.879 fused_ordering(565) 00:13:16.879 fused_ordering(566) 00:13:16.879 fused_ordering(567) 00:13:16.879 fused_ordering(568) 00:13:16.879 fused_ordering(569) 00:13:16.879 fused_ordering(570) 00:13:16.879 fused_ordering(571) 00:13:16.879 fused_ordering(572) 00:13:16.879 fused_ordering(573) 00:13:16.879 
fused_ordering(574) 00:13:16.879 fused_ordering(575) 00:13:16.879 fused_ordering(576) 00:13:16.879 fused_ordering(577) 00:13:16.879 fused_ordering(578) 00:13:16.879 fused_ordering(579) 00:13:16.879 fused_ordering(580) 00:13:16.879 fused_ordering(581) 00:13:16.879 fused_ordering(582) 00:13:16.879 fused_ordering(583) 00:13:16.879 fused_ordering(584) 00:13:16.879 fused_ordering(585) 00:13:16.879 fused_ordering(586) 00:13:16.879 fused_ordering(587) 00:13:16.879 fused_ordering(588) 00:13:16.879 fused_ordering(589) 00:13:16.879 fused_ordering(590) 00:13:16.879 fused_ordering(591) 00:13:16.879 fused_ordering(592) 00:13:16.879 fused_ordering(593) 00:13:16.879 fused_ordering(594) 00:13:16.879 fused_ordering(595) 00:13:16.879 fused_ordering(596) 00:13:16.879 fused_ordering(597) 00:13:16.879 fused_ordering(598) 00:13:16.879 fused_ordering(599) 00:13:16.879 fused_ordering(600) 00:13:16.879 fused_ordering(601) 00:13:16.879 fused_ordering(602) 00:13:16.879 fused_ordering(603) 00:13:16.879 fused_ordering(604) 00:13:16.879 fused_ordering(605) 00:13:16.879 fused_ordering(606) 00:13:16.879 fused_ordering(607) 00:13:16.879 fused_ordering(608) 00:13:16.879 fused_ordering(609) 00:13:16.879 fused_ordering(610) 00:13:16.879 fused_ordering(611) 00:13:16.879 fused_ordering(612) 00:13:16.879 fused_ordering(613) 00:13:16.879 fused_ordering(614) 00:13:16.879 fused_ordering(615) 00:13:17.138 fused_ordering(616) 00:13:17.138 fused_ordering(617) 00:13:17.138 fused_ordering(618) 00:13:17.138 fused_ordering(619) 00:13:17.138 fused_ordering(620) 00:13:17.138 fused_ordering(621) 00:13:17.138 fused_ordering(622) 00:13:17.138 fused_ordering(623) 00:13:17.138 fused_ordering(624) 00:13:17.138 fused_ordering(625) 00:13:17.138 fused_ordering(626) 00:13:17.138 fused_ordering(627) 00:13:17.138 fused_ordering(628) 00:13:17.138 fused_ordering(629) 00:13:17.138 fused_ordering(630) 00:13:17.138 fused_ordering(631) 00:13:17.138 fused_ordering(632) 00:13:17.138 fused_ordering(633) 00:13:17.138 fused_ordering(634) 
00:13:17.138 fused_ordering(635) 00:13:17.138 fused_ordering(636) 00:13:17.138 fused_ordering(637) 00:13:17.138 fused_ordering(638) 00:13:17.138 fused_ordering(639) 00:13:17.138 fused_ordering(640) 00:13:17.138 fused_ordering(641) 00:13:17.138 fused_ordering(642) 00:13:17.138 fused_ordering(643) 00:13:17.138 fused_ordering(644) 00:13:17.138 fused_ordering(645) 00:13:17.138 fused_ordering(646) 00:13:17.138 fused_ordering(647) 00:13:17.138 fused_ordering(648) 00:13:17.138 fused_ordering(649) 00:13:17.138 fused_ordering(650) 00:13:17.138 fused_ordering(651) 00:13:17.138 fused_ordering(652) 00:13:17.138 fused_ordering(653) 00:13:17.138 fused_ordering(654) 00:13:17.138 fused_ordering(655) 00:13:17.138 fused_ordering(656) 00:13:17.138 fused_ordering(657) 00:13:17.138 fused_ordering(658) 00:13:17.138 fused_ordering(659) 00:13:17.138 fused_ordering(660) 00:13:17.138 fused_ordering(661) 00:13:17.138 fused_ordering(662) 00:13:17.138 fused_ordering(663) 00:13:17.138 fused_ordering(664) 00:13:17.138 fused_ordering(665) 00:13:17.138 fused_ordering(666) 00:13:17.138 fused_ordering(667) 00:13:17.138 fused_ordering(668) 00:13:17.138 fused_ordering(669) 00:13:17.138 fused_ordering(670) 00:13:17.138 fused_ordering(671) 00:13:17.138 fused_ordering(672) 00:13:17.138 fused_ordering(673) 00:13:17.138 fused_ordering(674) 00:13:17.138 fused_ordering(675) 00:13:17.138 fused_ordering(676) 00:13:17.138 fused_ordering(677) 00:13:17.138 fused_ordering(678) 00:13:17.138 fused_ordering(679) 00:13:17.138 fused_ordering(680) 00:13:17.138 fused_ordering(681) 00:13:17.138 fused_ordering(682) 00:13:17.138 fused_ordering(683) 00:13:17.138 fused_ordering(684) 00:13:17.138 fused_ordering(685) 00:13:17.138 fused_ordering(686) 00:13:17.138 fused_ordering(687) 00:13:17.138 fused_ordering(688) 00:13:17.138 fused_ordering(689) 00:13:17.138 fused_ordering(690) 00:13:17.138 fused_ordering(691) 00:13:17.138 fused_ordering(692) 00:13:17.138 fused_ordering(693) 00:13:17.138 fused_ordering(694) 00:13:17.138 
fused_ordering(695) 00:13:17.138 fused_ordering(696) 00:13:17.138 fused_ordering(697) 00:13:17.138 fused_ordering(698) 00:13:17.138 fused_ordering(699) 00:13:17.138 fused_ordering(700) 00:13:17.138 fused_ordering(701) 00:13:17.138 fused_ordering(702) 00:13:17.138 fused_ordering(703) 00:13:17.138 fused_ordering(704) 00:13:17.138 fused_ordering(705) 00:13:17.138 fused_ordering(706) 00:13:17.138 fused_ordering(707) 00:13:17.138 fused_ordering(708) 00:13:17.138 fused_ordering(709) 00:13:17.138 fused_ordering(710) 00:13:17.138 fused_ordering(711) 00:13:17.138 fused_ordering(712) 00:13:17.138 fused_ordering(713) 00:13:17.138 fused_ordering(714) 00:13:17.138 fused_ordering(715) 00:13:17.138 fused_ordering(716) 00:13:17.138 fused_ordering(717) 00:13:17.138 fused_ordering(718) 00:13:17.138 fused_ordering(719) 00:13:17.138 fused_ordering(720) 00:13:17.138 fused_ordering(721) 00:13:17.138 fused_ordering(722) 00:13:17.138 fused_ordering(723) 00:13:17.138 fused_ordering(724) 00:13:17.138 fused_ordering(725) 00:13:17.138 fused_ordering(726) 00:13:17.138 fused_ordering(727) 00:13:17.138 fused_ordering(728) 00:13:17.138 fused_ordering(729) 00:13:17.138 fused_ordering(730) 00:13:17.138 fused_ordering(731) 00:13:17.138 fused_ordering(732) 00:13:17.138 fused_ordering(733) 00:13:17.138 fused_ordering(734) 00:13:17.138 fused_ordering(735) 00:13:17.138 fused_ordering(736) 00:13:17.138 fused_ordering(737) 00:13:17.138 fused_ordering(738) 00:13:17.138 fused_ordering(739) 00:13:17.138 fused_ordering(740) 00:13:17.138 fused_ordering(741) 00:13:17.138 fused_ordering(742) 00:13:17.138 fused_ordering(743) 00:13:17.138 fused_ordering(744) 00:13:17.138 fused_ordering(745) 00:13:17.138 fused_ordering(746) 00:13:17.138 fused_ordering(747) 00:13:17.138 fused_ordering(748) 00:13:17.138 fused_ordering(749) 00:13:17.138 fused_ordering(750) 00:13:17.138 fused_ordering(751) 00:13:17.138 fused_ordering(752) 00:13:17.138 fused_ordering(753) 00:13:17.138 fused_ordering(754) 00:13:17.138 fused_ordering(755) 
00:13:17.138 fused_ordering(756) 00:13:17.138 fused_ordering(757) 00:13:17.138 fused_ordering(758) 00:13:17.138 fused_ordering(759) 00:13:17.138 fused_ordering(760) 00:13:17.138 fused_ordering(761) 00:13:17.138 fused_ordering(762) 00:13:17.138 fused_ordering(763) 00:13:17.138 fused_ordering(764) 00:13:17.138 fused_ordering(765) 00:13:17.138 fused_ordering(766) 00:13:17.138 fused_ordering(767) 00:13:17.138 fused_ordering(768) 00:13:17.138 fused_ordering(769) 00:13:17.138 fused_ordering(770) 00:13:17.138 fused_ordering(771) 00:13:17.138 fused_ordering(772) 00:13:17.138 fused_ordering(773) 00:13:17.138 fused_ordering(774) 00:13:17.138 fused_ordering(775) 00:13:17.138 fused_ordering(776) 00:13:17.138 fused_ordering(777) 00:13:17.138 fused_ordering(778) 00:13:17.138 fused_ordering(779) 00:13:17.138 fused_ordering(780) 00:13:17.139 fused_ordering(781) 00:13:17.139 fused_ordering(782) 00:13:17.139 fused_ordering(783) 00:13:17.139 fused_ordering(784) 00:13:17.139 fused_ordering(785) 00:13:17.139 fused_ordering(786) 00:13:17.139 fused_ordering(787) 00:13:17.139 fused_ordering(788) 00:13:17.139 fused_ordering(789) 00:13:17.139 fused_ordering(790) 00:13:17.139 fused_ordering(791) 00:13:17.139 fused_ordering(792) 00:13:17.139 fused_ordering(793) 00:13:17.139 fused_ordering(794) 00:13:17.139 fused_ordering(795) 00:13:17.139 fused_ordering(796) 00:13:17.139 fused_ordering(797) 00:13:17.139 fused_ordering(798) 00:13:17.139 fused_ordering(799) 00:13:17.139 fused_ordering(800) 00:13:17.139 fused_ordering(801) 00:13:17.139 fused_ordering(802) 00:13:17.139 fused_ordering(803) 00:13:17.139 fused_ordering(804) 00:13:17.139 fused_ordering(805) 00:13:17.139 fused_ordering(806) 00:13:17.139 fused_ordering(807) 00:13:17.139 fused_ordering(808) 00:13:17.139 fused_ordering(809) 00:13:17.139 fused_ordering(810) 00:13:17.139 fused_ordering(811) 00:13:17.139 fused_ordering(812) 00:13:17.139 fused_ordering(813) 00:13:17.139 fused_ordering(814) 00:13:17.139 fused_ordering(815) 00:13:17.139 
fused_ordering(816) 00:13:17.139 fused_ordering(817) 00:13:17.139 fused_ordering(818) 00:13:17.139 fused_ordering(819) 00:13:17.139 fused_ordering(820) 00:13:17.708 fused_ordering(821) 00:13:17.708 fused_ordering(822) 00:13:17.708 fused_ordering(823) 00:13:17.708 fused_ordering(824) 00:13:17.708 fused_ordering(825) 00:13:17.708 fused_ordering(826) 00:13:17.708 fused_ordering(827) 00:13:17.708 fused_ordering(828) 00:13:17.708 fused_ordering(829) 00:13:17.708 fused_ordering(830) 00:13:17.708 fused_ordering(831) 00:13:17.708 fused_ordering(832) 00:13:17.708 fused_ordering(833) 00:13:17.708 fused_ordering(834) 00:13:17.708 fused_ordering(835) 00:13:17.708 fused_ordering(836) 00:13:17.708 fused_ordering(837) 00:13:17.708 fused_ordering(838) 00:13:17.708 fused_ordering(839) 00:13:17.708 fused_ordering(840) 00:13:17.708 fused_ordering(841) 00:13:17.708 fused_ordering(842) 00:13:17.708 fused_ordering(843) 00:13:17.708 fused_ordering(844) 00:13:17.708 fused_ordering(845) 00:13:17.708 fused_ordering(846) 00:13:17.708 fused_ordering(847) 00:13:17.708 fused_ordering(848) 00:13:17.708 fused_ordering(849) 00:13:17.708 fused_ordering(850) 00:13:17.708 fused_ordering(851) 00:13:17.708 fused_ordering(852) 00:13:17.708 fused_ordering(853) 00:13:17.708 fused_ordering(854) 00:13:17.708 fused_ordering(855) 00:13:17.708 fused_ordering(856) 00:13:17.708 fused_ordering(857) 00:13:17.708 fused_ordering(858) 00:13:17.708 fused_ordering(859) 00:13:17.708 fused_ordering(860) 00:13:17.708 fused_ordering(861) 00:13:17.708 fused_ordering(862) 00:13:17.708 fused_ordering(863) 00:13:17.708 fused_ordering(864) 00:13:17.708 fused_ordering(865) 00:13:17.708 fused_ordering(866) 00:13:17.708 fused_ordering(867) 00:13:17.708 fused_ordering(868) 00:13:17.708 fused_ordering(869) 00:13:17.708 fused_ordering(870) 00:13:17.708 fused_ordering(871) 00:13:17.708 fused_ordering(872) 00:13:17.708 fused_ordering(873) 00:13:17.708 fused_ordering(874) 00:13:17.708 fused_ordering(875) 00:13:17.708 fused_ordering(876) 
00:13:17.708 fused_ordering(877) 00:13:17.708 fused_ordering(878) 00:13:17.708 fused_ordering(879) 00:13:17.708 fused_ordering(880) 00:13:17.708 fused_ordering(881) 00:13:17.708 fused_ordering(882) 00:13:17.708 fused_ordering(883) 00:13:17.708 fused_ordering(884) 00:13:17.708 fused_ordering(885) 00:13:17.708 fused_ordering(886) 00:13:17.708 fused_ordering(887) 00:13:17.708 fused_ordering(888) 00:13:17.708 fused_ordering(889) 00:13:17.708 fused_ordering(890) 00:13:17.708 fused_ordering(891) 00:13:17.708 fused_ordering(892) 00:13:17.708 fused_ordering(893) 00:13:17.708 fused_ordering(894) 00:13:17.708 fused_ordering(895) 00:13:17.708 fused_ordering(896) 00:13:17.708 fused_ordering(897) 00:13:17.708 fused_ordering(898) 00:13:17.708 fused_ordering(899) 00:13:17.708 fused_ordering(900) 00:13:17.708 fused_ordering(901) 00:13:17.708 fused_ordering(902) 00:13:17.708 fused_ordering(903) 00:13:17.708 fused_ordering(904) 00:13:17.708 fused_ordering(905) 00:13:17.708 fused_ordering(906) 00:13:17.708 fused_ordering(907) 00:13:17.708 fused_ordering(908) 00:13:17.708 fused_ordering(909) 00:13:17.708 fused_ordering(910) 00:13:17.708 fused_ordering(911) 00:13:17.708 fused_ordering(912) 00:13:17.708 fused_ordering(913) 00:13:17.708 fused_ordering(914) 00:13:17.708 fused_ordering(915) 00:13:17.708 fused_ordering(916) 00:13:17.708 fused_ordering(917) 00:13:17.708 fused_ordering(918) 00:13:17.708 fused_ordering(919) 00:13:17.708 fused_ordering(920) 00:13:17.708 fused_ordering(921) 00:13:17.708 fused_ordering(922) 00:13:17.708 fused_ordering(923) 00:13:17.708 fused_ordering(924) 00:13:17.708 fused_ordering(925) 00:13:17.708 fused_ordering(926) 00:13:17.708 fused_ordering(927) 00:13:17.708 fused_ordering(928) 00:13:17.708 fused_ordering(929) 00:13:17.708 fused_ordering(930) 00:13:17.708 fused_ordering(931) 00:13:17.708 fused_ordering(932) 00:13:17.708 fused_ordering(933) 00:13:17.708 fused_ordering(934) 00:13:17.708 fused_ordering(935) 00:13:17.708 fused_ordering(936) 00:13:17.708 
fused_ordering(937) 00:13:17.708 fused_ordering(938) 00:13:17.708 fused_ordering(939) 00:13:17.708 fused_ordering(940) 00:13:17.708 fused_ordering(941) 00:13:17.708 fused_ordering(942) 00:13:17.708 fused_ordering(943) 00:13:17.708 fused_ordering(944) 00:13:17.708 fused_ordering(945) 00:13:17.708 fused_ordering(946) 00:13:17.708 fused_ordering(947) 00:13:17.708 fused_ordering(948) 00:13:17.708 fused_ordering(949) 00:13:17.708 fused_ordering(950) 00:13:17.708 fused_ordering(951) 00:13:17.708 fused_ordering(952) 00:13:17.708 fused_ordering(953) 00:13:17.708 fused_ordering(954) 00:13:17.708 fused_ordering(955) 00:13:17.708 fused_ordering(956) 00:13:17.708 fused_ordering(957) 00:13:17.708 fused_ordering(958) 00:13:17.709 fused_ordering(959) 00:13:17.709 fused_ordering(960) 00:13:17.709 fused_ordering(961) 00:13:17.709 fused_ordering(962) 00:13:17.709 fused_ordering(963) 00:13:17.709 fused_ordering(964) 00:13:17.709 fused_ordering(965) 00:13:17.709 fused_ordering(966) 00:13:17.709 fused_ordering(967) 00:13:17.709 fused_ordering(968) 00:13:17.709 fused_ordering(969) 00:13:17.709 fused_ordering(970) 00:13:17.709 fused_ordering(971) 00:13:17.709 fused_ordering(972) 00:13:17.709 fused_ordering(973) 00:13:17.709 fused_ordering(974) 00:13:17.709 fused_ordering(975) 00:13:17.709 fused_ordering(976) 00:13:17.709 fused_ordering(977) 00:13:17.709 fused_ordering(978) 00:13:17.709 fused_ordering(979) 00:13:17.709 fused_ordering(980) 00:13:17.709 fused_ordering(981) 00:13:17.709 fused_ordering(982) 00:13:17.709 fused_ordering(983) 00:13:17.709 fused_ordering(984) 00:13:17.709 fused_ordering(985) 00:13:17.709 fused_ordering(986) 00:13:17.709 fused_ordering(987) 00:13:17.709 fused_ordering(988) 00:13:17.709 fused_ordering(989) 00:13:17.709 fused_ordering(990) 00:13:17.709 fused_ordering(991) 00:13:17.709 fused_ordering(992) 00:13:17.709 fused_ordering(993) 00:13:17.709 fused_ordering(994) 00:13:17.709 fused_ordering(995) 00:13:17.709 fused_ordering(996) 00:13:17.709 fused_ordering(997) 
00:13:17.709 fused_ordering(998) 00:13:17.709 fused_ordering(999) 00:13:17.709 fused_ordering(1000) 00:13:17.709 fused_ordering(1001) 00:13:17.709 fused_ordering(1002) 00:13:17.709 fused_ordering(1003) 00:13:17.709 fused_ordering(1004) 00:13:17.709 fused_ordering(1005) 00:13:17.709 fused_ordering(1006) 00:13:17.709 fused_ordering(1007) 00:13:17.709 fused_ordering(1008) 00:13:17.709 fused_ordering(1009) 00:13:17.709 fused_ordering(1010) 00:13:17.709 fused_ordering(1011) 00:13:17.709 fused_ordering(1012) 00:13:17.709 fused_ordering(1013) 00:13:17.709 fused_ordering(1014) 00:13:17.709 fused_ordering(1015) 00:13:17.709 fused_ordering(1016) 00:13:17.709 fused_ordering(1017) 00:13:17.709 fused_ordering(1018) 00:13:17.709 fused_ordering(1019) 00:13:17.709 fused_ordering(1020) 00:13:17.709 fused_ordering(1021) 00:13:17.709 fused_ordering(1022) 00:13:17.709 fused_ordering(1023) 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.709 rmmod nvme_tcp 00:13:17.709 rmmod nvme_fabrics 00:13:17.709 rmmod nvme_keyring 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2211970 ']' 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2211970 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2211970 ']' 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2211970 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.709 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2211970 00:13:17.968 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:17.968 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:17.968 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2211970' 00:13:17.968 killing process with pid 2211970 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2211970 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2211970 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.969 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.506 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:20.506 00:13:20.506 real 0m10.655s 00:13:20.506 user 0m5.051s 00:13:20.506 sys 0m5.718s 00:13:20.506 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.506 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.506 ************************************ 00:13:20.506 END TEST nvmf_fused_ordering 00:13:20.506 ************************************ 00:13:20.506 11:24:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:20.506 11:24:33 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.506 11:24:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.506 11:24:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.506 ************************************ 00:13:20.506 START TEST nvmf_ns_masking 00:13:20.506 ************************************ 00:13:20.506 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:20.506 * Looking for test storage... 00:13:20.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.506 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:20.506 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:20.506 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:20.506 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.507 11:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:20.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.507 --rc genhtml_branch_coverage=1 00:13:20.507 --rc genhtml_function_coverage=1 00:13:20.507 --rc genhtml_legend=1 00:13:20.507 --rc geninfo_all_blocks=1 00:13:20.507 --rc geninfo_unexecuted_blocks=1 00:13:20.507 00:13:20.507 ' 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:20.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.507 --rc genhtml_branch_coverage=1 00:13:20.507 --rc genhtml_function_coverage=1 00:13:20.507 --rc genhtml_legend=1 00:13:20.507 --rc geninfo_all_blocks=1 00:13:20.507 --rc geninfo_unexecuted_blocks=1 00:13:20.507 00:13:20.507 ' 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:20.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.507 --rc genhtml_branch_coverage=1 00:13:20.507 --rc genhtml_function_coverage=1 00:13:20.507 --rc genhtml_legend=1 00:13:20.507 --rc geninfo_all_blocks=1 00:13:20.507 --rc geninfo_unexecuted_blocks=1 00:13:20.507 00:13:20.507 ' 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:20.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.507 --rc genhtml_branch_coverage=1 00:13:20.507 --rc 
genhtml_function_coverage=1 00:13:20.507 --rc genhtml_legend=1 00:13:20.507 --rc geninfo_all_blocks=1 00:13:20.507 --rc geninfo_unexecuted_blocks=1 00:13:20.507 00:13:20.507 ' 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.507 11:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4e0696da-6087-40f4-ad29-69888f39b5d1 00:13:20.507 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=efacba46-cf53-4e0e-b9c3-2543a86ffa53 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7b2ecb0b-f40e-4b5f-80c2-eaa0e160d5bf 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:20.508 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:27.081 11:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.081 11:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:27.081 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:27.081 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:27.081 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:13:27.082 Found net devices under 0000:86:00.0: cvl_0_0 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:27.082 Found net devices under 0000:86:00.1: cvl_0_1 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:27.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:13:27.082 00:13:27.082 --- 10.0.0.2 ping statistics --- 00:13:27.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.082 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:13:27.082 00:13:27.082 --- 10.0.0.1 ping statistics --- 00:13:27.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.082 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2215946 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2215946 
00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2215946 ']' 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.082 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:27.082 [2024-11-19 11:24:40.044655] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:27.082 [2024-11-19 11:24:40.044715] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.082 [2024-11-19 11:24:40.129402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.082 [2024-11-19 11:24:40.171487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.082 [2024-11-19 11:24:40.171522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:27.082 [2024-11-19 11:24:40.171530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.082 [2024-11-19 11:24:40.171536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.082 [2024-11-19 11:24:40.171541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.082 [2024-11-19 11:24:40.172117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.082 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.082 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:27.082 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.082 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:27.082 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:27.082 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.082 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:27.082 [2024-11-19 11:24:40.476917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.082 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:27.083 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:27.083 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:13:27.083 Malloc1 00:13:27.083 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:27.341 Malloc2 00:13:27.341 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:27.600 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:27.600 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.860 [2024-11-19 11:24:41.524183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.860 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:27.860 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7b2ecb0b-f40e-4b5f-80c2-eaa0e160d5bf -a 10.0.0.2 -s 4420 -i 4 00:13:28.119 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:28.119 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:28.119 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:28.119 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:28.119 11:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:30.024 [ 0]:0x1 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:30.024 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.283 
11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc6324490049443ba772236f39a1074a 00:13:30.283 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc6324490049443ba772236f39a1074a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.283 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:30.283 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:30.283 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.283 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:30.543 [ 0]:0x1 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc6324490049443ba772236f39a1074a 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc6324490049443ba772236f39a1074a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:30.543 [ 1]:0x2 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5ebccc480d2490b8adebe2dc57ab42f 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5ebccc480d2490b8adebe2dc57ab42f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.543 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.802 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:31.061 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:31.061 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7b2ecb0b-f40e-4b5f-80c2-eaa0e160d5bf -a 10.0.0.2 -s 4420 -i 4 00:13:31.061 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:31.061 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:31.061 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.061 11:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:31.061 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:31.061 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
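The `NOT ns_is_visible 0x1` steps in the trace use the autotest `NOT` wrapper (autotest_common.sh@652-679): the wrapped command is expected to fail — here, the masked namespace must not be found — and its non-zero exit status is inverted so the overall test step passes. A rough standalone equivalent (simplified; the real helper also validates the argument with `type -t` and tracks an `es` status code):

```shell
# Simplified sketch of the expected-failure wrapper pattern from the trace.
NOT() {
  if "$@"; then
    return 1   # the command unexpectedly succeeded
  else
    return 0   # failure was expected; treat it as success
  fi
}

NOT false && echo "command failed as expected"
NOT grep missing-token /dev/null && echo "no match, as expected"
```

This is why the log shows `es=1` followed by a passing step: the grep for the masked NSID finds nothing, and `NOT` converts that failure into a pass.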
00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:33.594 [ 0]:0x2 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5ebccc480d2490b8adebe2dc57ab42f 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5ebccc480d2490b8adebe2dc57ab42f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.594 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:33.594 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:33.595 [ 0]:0x1 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc6324490049443ba772236f39a1074a 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc6324490049443ba772236f39a1074a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:33.595 [ 1]:0x2 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5ebccc480d2490b8adebe2dc57ab42f 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5ebccc480d2490b8adebe2dc57ab42f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.595 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:33.854 [ 0]:0x2 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5ebccc480d2490b8adebe2dc57ab42f 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5ebccc480d2490b8adebe2dc57ab42f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:33.854 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.113 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:34.113 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:34.113 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7b2ecb0b-f40e-4b5f-80c2-eaa0e160d5bf -a 10.0.0.2 -s 4420 -i 4 00:13:34.372 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:34.372 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:34.372 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.372 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:34.372 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:34.372 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:36.274 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:36.274 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:36.274 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.274 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:36.274 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.274 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:36.274 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:36.274 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:36.274 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:36.274 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:36.274 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:36.274 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:36.274 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:36.274 [ 0]:0x1 00:13:36.274 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:36.274 11:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:36.533 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc6324490049443ba772236f39a1074a 00:13:36.533 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc6324490049443ba772236f39a1074a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:36.533 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:36.533 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:36.533 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:36.533 [ 1]:0x2 00:13:36.533 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:36.533 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:36.533 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5ebccc480d2490b8adebe2dc57ab42f 00:13:36.533 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5ebccc480d2490b8adebe2dc57ab42f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:36.533 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:36.792 
11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:36.792 [ 0]:0x2 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5ebccc480d2490b8adebe2dc57ab42f 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5ebccc480d2490b8adebe2dc57ab42f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.792 11:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:36.792 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:37.052 [2024-11-19 11:24:50.622192] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:37.052 request: 00:13:37.052 { 00:13:37.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.052 "nsid": 2, 00:13:37.052 "host": "nqn.2016-06.io.spdk:host1", 00:13:37.052 "method": "nvmf_ns_remove_host", 00:13:37.052 "req_id": 1 00:13:37.052 } 00:13:37.052 Got JSON-RPC error response 00:13:37.052 response: 00:13:37.052 { 00:13:37.052 "code": -32602, 00:13:37.052 "message": "Invalid parameters" 00:13:37.052 } 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:37.052 11:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:37.052 [ 0]:0x2 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5ebccc480d2490b8adebe2dc57ab42f 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5ebccc480d2490b8adebe2dc57ab42f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:37.052 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.312 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2217759 00:13:37.312 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.312 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2217759 
/var/tmp/host.sock 00:13:37.312 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:37.312 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2217759 ']' 00:13:37.312 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:37.312 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.312 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:37.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:37.312 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.312 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:37.312 [2024-11-19 11:24:50.995004] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:13:37.312 [2024-11-19 11:24:50.995055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217759 ] 00:13:37.312 [2024-11-19 11:24:51.070518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.570 [2024-11-19 11:24:51.112424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.570 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.570 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:37.570 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.829 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.088 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4e0696da-6087-40f4-ad29-69888f39b5d1 00:13:38.088 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:38.088 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4E0696DA608740F4AD2969888F39B5D1 -i 00:13:38.347 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid efacba46-cf53-4e0e-b9c3-2543a86ffa53 00:13:38.347 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:38.347 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g EFACBA46CF534E0EB9C32543A86FFA53 -i 00:13:38.606 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:38.606 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:38.864 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:38.864 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:39.123 nvme0n1 00:13:39.123 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:39.123 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:39.691 nvme1n2 00:13:39.691 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:39.691 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:39.691 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:39.691 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:39.691 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:39.950 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:39.950 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:39.950 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:39.950 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:39.950 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4e0696da-6087-40f4-ad29-69888f39b5d1 == \4\e\0\6\9\6\d\a\-\6\0\8\7\-\4\0\f\4\-\a\d\2\9\-\6\9\8\8\8\f\3\9\b\5\d\1 ]] 00:13:39.950 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:39.950 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:39.950 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:40.209 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ efacba46-cf53-4e0e-b9c3-2543a86ffa53 == \e\f\a\c\b\a\4\6\-\c\f\5\3\-\4\e\0\e\-\b\9\c\3\-\2\5\4\3\a\8\6\f\f\a\5\3 ]] 00:13:40.209 11:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.468 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 4e0696da-6087-40f4-ad29-69888f39b5d1 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4E0696DA608740F4AD2969888F39B5D1 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4E0696DA608740F4AD2969888F39B5D1 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.727 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:40.728 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4E0696DA608740F4AD2969888F39B5D1 00:13:40.728 [2024-11-19 11:24:54.492841] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:40.728 [2024-11-19 11:24:54.492872] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:40.728 [2024-11-19 11:24:54.492880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.728 request: 00:13:40.728 { 00:13:40.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:40.728 "namespace": { 00:13:40.728 "bdev_name": "invalid", 00:13:40.728 "nsid": 1, 00:13:40.728 "nguid": "4E0696DA608740F4AD2969888F39B5D1", 00:13:40.728 "no_auto_visible": false 00:13:40.728 }, 00:13:40.728 "method": "nvmf_subsystem_add_ns", 00:13:40.728 "req_id": 1 00:13:40.728 } 00:13:40.728 Got JSON-RPC error response 00:13:40.728 response: 00:13:40.728 { 00:13:40.728 "code": -32602, 00:13:40.728 "message": "Invalid parameters" 00:13:40.728 } 00:13:40.986 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:40.986 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:40.986 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:40.986 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:40.986 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 4e0696da-6087-40f4-ad29-69888f39b5d1 00:13:40.986 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:40.986 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4E0696DA608740F4AD2969888F39B5D1 -i 00:13:40.986 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:43.519 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:43.519 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:43.519 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:43.519 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:43.519 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2217759 00:13:43.519 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2217759 ']' 00:13:43.519 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2217759 00:13:43.519 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:43.519 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.519 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2217759 00:13:43.519 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:43.519 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:43.519 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2217759' 00:13:43.519 killing process with pid 2217759 00:13:43.519 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2217759 00:13:43.519 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2217759 00:13:43.778 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.778 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:43.778 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:43.778 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:43.778 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:43.778 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:43.778 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:43.778 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:43.778 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:43.778 rmmod nvme_tcp 00:13:43.778 rmmod 
nvme_fabrics 00:13:44.036 rmmod nvme_keyring 00:13:44.036 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:44.036 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:44.036 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:44.036 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2215946 ']' 00:13:44.036 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2215946 00:13:44.036 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2215946 ']' 00:13:44.036 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2215946 00:13:44.036 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:44.037 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.037 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2215946 00:13:44.037 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.037 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.037 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2215946' 00:13:44.037 killing process with pid 2215946 00:13:44.037 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2215946 00:13:44.037 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2215946 00:13:44.295 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:44.296 
11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:44.296 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:44.296 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:44.296 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:44.296 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:44.296 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:44.296 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:44.296 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:44.296 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.296 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.296 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.320 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:46.320 00:13:46.320 real 0m26.101s 00:13:46.320 user 0m31.355s 00:13:46.320 sys 0m7.214s 00:13:46.320 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.320 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:46.320 ************************************ 00:13:46.320 END TEST nvmf_ns_masking 00:13:46.320 ************************************ 00:13:46.320 11:24:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:46.320 11:24:59 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:46.320 11:24:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:46.320 11:24:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.320 11:24:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:46.320 ************************************ 00:13:46.320 START TEST nvmf_nvme_cli 00:13:46.320 ************************************ 00:13:46.320 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:46.320 * Looking for test storage... 00:13:46.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.320 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:46.320 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:46.320 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.580 11:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:46.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.580 --rc genhtml_branch_coverage=1 00:13:46.580 --rc genhtml_function_coverage=1 00:13:46.580 --rc genhtml_legend=1 00:13:46.580 --rc geninfo_all_blocks=1 00:13:46.580 --rc geninfo_unexecuted_blocks=1 00:13:46.580 
00:13:46.580 ' 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:46.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.580 --rc genhtml_branch_coverage=1 00:13:46.580 --rc genhtml_function_coverage=1 00:13:46.580 --rc genhtml_legend=1 00:13:46.580 --rc geninfo_all_blocks=1 00:13:46.580 --rc geninfo_unexecuted_blocks=1 00:13:46.580 00:13:46.580 ' 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:46.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.580 --rc genhtml_branch_coverage=1 00:13:46.580 --rc genhtml_function_coverage=1 00:13:46.580 --rc genhtml_legend=1 00:13:46.580 --rc geninfo_all_blocks=1 00:13:46.580 --rc geninfo_unexecuted_blocks=1 00:13:46.580 00:13:46.580 ' 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:46.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.580 --rc genhtml_branch_coverage=1 00:13:46.580 --rc genhtml_function_coverage=1 00:13:46.580 --rc genhtml_legend=1 00:13:46.580 --rc geninfo_all_blocks=1 00:13:46.580 --rc geninfo_unexecuted_blocks=1 00:13:46.580 00:13:46.580 ' 00:13:46.580 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.581 11:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:46.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:46.581 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:53.149 11:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:53.149 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:53.149 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.149 11:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:53.149 Found net devices under 0000:86:00.0: cvl_0_0 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:53.149 Found net devices under 0000:86:00.1: cvl_0_1 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.149 11:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:53.149 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.149 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:53.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:13:53.150 00:13:53.150 --- 10.0.0.2 ping statistics --- 00:13:53.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.150 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:13:53.150 00:13:53.150 --- 10.0.0.1 ping statistics --- 00:13:53.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.150 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:53.150 11:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2222479 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2222479 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2222479 ']' 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.150 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.150 [2024-11-19 11:25:06.186380] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:13:53.150 [2024-11-19 11:25:06.186425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.150 [2024-11-19 11:25:06.265030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.150 [2024-11-19 11:25:06.308530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.150 [2024-11-19 11:25:06.308568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.150 [2024-11-19 11:25:06.308576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.150 [2024-11-19 11:25:06.308582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.150 [2024-11-19 11:25:06.308587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:53.150 [2024-11-19 11:25:06.310179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.150 [2024-11-19 11:25:06.310307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.150 [2024-11-19 11:25:06.310416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.150 [2024-11-19 11:25:06.310417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.408 [2024-11-19 11:25:07.069742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.408 Malloc0 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.408 Malloc1 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.408 [2024-11-19 11:25:07.160369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:53.408 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.409 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.409 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.409 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:53.666 00:13:53.666 Discovery Log Number of Records 2, Generation counter 2 00:13:53.666 =====Discovery Log Entry 0====== 00:13:53.666 trtype: tcp 00:13:53.666 adrfam: ipv4 00:13:53.666 subtype: current discovery subsystem 00:13:53.666 treq: not required 00:13:53.666 portid: 0 00:13:53.666 trsvcid: 4420 
00:13:53.666 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:53.666 traddr: 10.0.0.2 00:13:53.666 eflags: explicit discovery connections, duplicate discovery information 00:13:53.666 sectype: none 00:13:53.666 =====Discovery Log Entry 1====== 00:13:53.666 trtype: tcp 00:13:53.666 adrfam: ipv4 00:13:53.666 subtype: nvme subsystem 00:13:53.666 treq: not required 00:13:53.666 portid: 0 00:13:53.666 trsvcid: 4420 00:13:53.666 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:53.666 traddr: 10.0.0.2 00:13:53.666 eflags: none 00:13:53.666 sectype: none 00:13:53.666 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:53.666 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:53.666 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:53.666 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:53.666 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:53.666 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:53.666 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:53.666 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:53.666 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:53.666 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:53.666 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:55.039 11:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:55.039 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:55.039 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.039 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:55.039 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:55.039 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:56.936 
11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:56.936 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:56.936 /dev/nvme0n2 ]] 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:56.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:56.937 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:57.196 rmmod nvme_tcp 00:13:57.196 rmmod nvme_fabrics 00:13:57.196 rmmod nvme_keyring 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2222479 ']' 
00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2222479 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2222479 ']' 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2222479 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2222479 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2222479' 00:13:57.196 killing process with pid 2222479 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2222479 00:13:57.196 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2222479 00:13:57.455 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:57.455 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:57.455 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:57.455 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:57.455 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:57.455 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:57.455 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:57.455 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:57.455 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:57.455 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.455 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.455 11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.361 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:59.361 00:13:59.361 real 0m13.149s 00:13:59.361 user 0m20.680s 00:13:59.361 sys 0m5.225s 00:13:59.361 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.361 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.361 ************************************ 00:13:59.361 END TEST nvmf_nvme_cli 00:13:59.361 ************************************ 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:59.620 ************************************ 00:13:59.620 
START TEST nvmf_vfio_user 00:13:59.620 ************************************ 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:59.620 * Looking for test storage... 00:13:59.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.620 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:59.621 11:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:59.621 11:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:59.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.621 --rc genhtml_branch_coverage=1 00:13:59.621 --rc genhtml_function_coverage=1 00:13:59.621 --rc genhtml_legend=1 00:13:59.621 --rc geninfo_all_blocks=1 00:13:59.621 --rc geninfo_unexecuted_blocks=1 00:13:59.621 00:13:59.621 ' 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:59.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.621 --rc genhtml_branch_coverage=1 00:13:59.621 --rc genhtml_function_coverage=1 00:13:59.621 --rc genhtml_legend=1 00:13:59.621 --rc geninfo_all_blocks=1 00:13:59.621 --rc geninfo_unexecuted_blocks=1 00:13:59.621 00:13:59.621 ' 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:59.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.621 --rc genhtml_branch_coverage=1 00:13:59.621 --rc genhtml_function_coverage=1 00:13:59.621 --rc genhtml_legend=1 00:13:59.621 --rc geninfo_all_blocks=1 00:13:59.621 --rc geninfo_unexecuted_blocks=1 00:13:59.621 00:13:59.621 ' 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:59.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.621 --rc genhtml_branch_coverage=1 00:13:59.621 --rc genhtml_function_coverage=1 00:13:59.621 --rc genhtml_legend=1 00:13:59.621 --rc geninfo_all_blocks=1 00:13:59.621 --rc geninfo_unexecuted_blocks=1 00:13:59.621 00:13:59.621 ' 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.621 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.880 
11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.880 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:59.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:59.881 11:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2223773 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2223773' 00:13:59.881 Process pid: 2223773 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2223773 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2223773 ']' 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.881 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:59.881 [2024-11-19 11:25:13.478485] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:59.881 [2024-11-19 11:25:13.478532] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.881 [2024-11-19 11:25:13.554759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.881 [2024-11-19 11:25:13.597520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.881 [2024-11-19 11:25:13.597559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.881 [2024-11-19 11:25:13.597566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.881 [2024-11-19 11:25:13.597572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.881 [2024-11-19 11:25:13.597577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:59.881 [2024-11-19 11:25:13.599109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.881 [2024-11-19 11:25:13.599219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.881 [2024-11-19 11:25:13.599329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.881 [2024-11-19 11:25:13.599329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.145 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.145 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:00.145 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:01.079 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:01.337 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:01.337 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:01.338 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:01.338 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:01.338 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:01.596 Malloc1 00:14:01.596 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:01.596 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:01.854 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:02.112 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:02.112 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:02.112 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:02.370 Malloc2 00:14:02.370 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:02.627 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:02.628 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:02.885 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:02.885 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:02.885 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:02.886 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:02.886 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:02.886 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:02.886 [2024-11-19 11:25:16.599740] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:14:02.886 [2024-11-19 11:25:16.599772] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224452 ] 00:14:02.886 [2024-11-19 11:25:16.643035] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:02.886 [2024-11-19 11:25:16.645417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:02.886 [2024-11-19 11:25:16.645439] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8baf4df000 00:14:02.886 [2024-11-19 11:25:16.646420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:02.886 [2024-11-19 11:25:16.647416] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:02.886 [2024-11-19 11:25:16.648426] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:02.886 [2024-11-19 11:25:16.649429] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:02.886 [2024-11-19 11:25:16.650438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:02.886 [2024-11-19 11:25:16.651443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:02.886 [2024-11-19 11:25:16.652451] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:02.886 [2024-11-19 11:25:16.653458] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:02.886 [2024-11-19 11:25:16.654468] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:02.886 [2024-11-19 11:25:16.654477] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8baf4d4000 00:14:02.886 [2024-11-19 11:25:16.655420] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:03.145 [2024-11-19 11:25:16.668027] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:03.145 [2024-11-19 11:25:16.668055] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:03.145 [2024-11-19 11:25:16.673582] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:03.145 [2024-11-19 11:25:16.673622] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:03.145 [2024-11-19 11:25:16.673689] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:03.145 [2024-11-19 11:25:16.673704] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:03.145 [2024-11-19 11:25:16.673710] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:03.145 [2024-11-19 11:25:16.674578] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:03.145 [2024-11-19 11:25:16.674588] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:03.145 [2024-11-19 11:25:16.674594] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:03.145 [2024-11-19 11:25:16.675591] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:03.145 [2024-11-19 11:25:16.675599] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:03.145 [2024-11-19 11:25:16.675606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:03.145 [2024-11-19 11:25:16.676597] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:03.145 [2024-11-19 11:25:16.676606] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:03.145 [2024-11-19 11:25:16.677600] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:03.145 [2024-11-19 11:25:16.677609] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:03.145 [2024-11-19 11:25:16.677613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:03.145 [2024-11-19 11:25:16.677619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:03.145 [2024-11-19 11:25:16.677726] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:03.145 [2024-11-19 11:25:16.677731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:03.145 [2024-11-19 11:25:16.677736] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:03.145 [2024-11-19 11:25:16.678611] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:03.145 [2024-11-19 11:25:16.679614] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:03.145 [2024-11-19 11:25:16.680626] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:03.145 [2024-11-19 11:25:16.681618] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:03.145 [2024-11-19 11:25:16.681681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:14:03.145 [2024-11-19 11:25:16.682638] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1
00:14:03.145 [2024-11-19 11:25:16.682648] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:14:03.145 [2024-11-19 11:25:16.682652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms)
00:14:03.145 [2024-11-19 11:25:16.682669] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout)
00:14:03.146 [2024-11-19 11:25:16.682680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.682695] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:14:03.146 [2024-11-19 11:25:16.682701] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:03.146 [2024-11-19 11:25:16.682704] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:03.146 [2024-11-19 11:25:16.682718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:03.146 [2024-11-19 11:25:16.682753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:14:03.146 [2024-11-19 11:25:16.682762] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072
00:14:03.146 [2024-11-19 11:25:16.682766] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072
00:14:03.146 [2024-11-19 11:25:16.682770] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001
00:14:03.146 [2024-11-19 11:25:16.682774] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:14:03.146 [2024-11-19 11:25:16.682780] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1
00:14:03.146 [2024-11-19 11:25:16.682785] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1
00:14:03.146 [2024-11-19 11:25:16.682789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.682798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.682807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:14:03.146 [2024-11-19 11:25:16.682822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:14:03.146 [2024-11-19 11:25:16.682832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:14:03.146 [2024-11-19 11:25:16.682840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:14:03.146 [2024-11-19 11:25:16.682848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:14:03.146 [2024-11-19 11:25:16.682855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:14:03.146 [2024-11-19 11:25:16.682859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.682866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.682876] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:14:03.146 [2024-11-19 11:25:16.682882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:14:03.146 [2024-11-19 11:25:16.682889] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms
00:14:03.146 [2024-11-19 11:25:16.682894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.682900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.682906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.682913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:14:03.146 [2024-11-19 11:25:16.682923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:14:03.146 [2024-11-19 11:25:16.682978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.682986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.682993] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:14:03.146 [2024-11-19 11:25:16.682997] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:14:03.146 [2024-11-19 11:25:16.683000] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:03.146 [2024-11-19 11:25:16.683005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:14:03.146 [2024-11-19 11:25:16.683019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:14:03.146 [2024-11-19 11:25:16.683027] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added
00:14:03.146 [2024-11-19 11:25:16.683035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.683043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.683049] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:14:03.146 [2024-11-19 11:25:16.683053] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:03.146 [2024-11-19 11:25:16.683056] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:03.146 [2024-11-19 11:25:16.683062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:03.146 [2024-11-19 11:25:16.683083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:14:03.146 [2024-11-19 11:25:16.683095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.683103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.683110] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:14:03.146 [2024-11-19 11:25:16.683114] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:03.146 [2024-11-19 11:25:16.683117] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:03.146 [2024-11-19 11:25:16.683122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:03.146 [2024-11-19 11:25:16.683132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:14:03.146 [2024-11-19 11:25:16.683140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.683146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.683154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.683159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.683163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.683168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.683173] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID
00:14:03.146 [2024-11-19 11:25:16.683177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms)
00:14:03.146 [2024-11-19 11:25:16.683181] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout)
00:14:03.146 [2024-11-19 11:25:16.683199] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:14:03.146 [2024-11-19 11:25:16.683208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:14:03.146 [2024-11-19 11:25:16.683218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:14:03.146 [2024-11-19 11:25:16.683227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:14:03.146 [2024-11-19 11:25:16.683237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:14:03.146 [2024-11-19 11:25:16.683247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:14:03.146 [2024-11-19 11:25:16.683257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:14:03.146 [2024-11-19 11:25:16.683269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:14:03.146 [2024-11-19 11:25:16.683281] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:14:03.146 [2024-11-19 11:25:16.683285] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:14:03.146 [2024-11-19 11:25:16.683288] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:14:03.146 [2024-11-19 11:25:16.683291] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:14:03.146 [2024-11-19 11:25:16.683294] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2
00:14:03.146 [2024-11-19 11:25:16.683301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:14:03.146 [2024-11-19 11:25:16.683308] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:14:03.146 [2024-11-19 11:25:16.683311] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:14:03.146 [2024-11-19 11:25:16.683315] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:03.146 [2024-11-19 11:25:16.683320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:14:03.146 [2024-11-19 11:25:16.683326] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:14:03.146 [2024-11-19 11:25:16.683330] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:03.147 [2024-11-19 11:25:16.683333] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:03.147 [2024-11-19 11:25:16.683338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:03.147 [2024-11-19 11:25:16.683345] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:14:03.147 [2024-11-19 11:25:16.683349] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:14:03.147 [2024-11-19 11:25:16.683352] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:03.147 [2024-11-19 11:25:16.683357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:14:03.147 [2024-11-19 11:25:16.683363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:14:03.147 [2024-11-19 11:25:16.683375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:14:03.147 [2024-11-19 11:25:16.683384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:14:03.147 [2024-11-19 11:25:16.683390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:14:03.147 =====================================================
00:14:03.147 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:14:03.147 =====================================================
00:14:03.147 Controller Capabilities/Features
00:14:03.147 ================================
00:14:03.147 Vendor ID: 4e58
00:14:03.147 Subsystem Vendor ID: 4e58
00:14:03.147 Serial Number: SPDK1
00:14:03.147 Model Number: SPDK bdev Controller
00:14:03.147 Firmware Version: 25.01
00:14:03.147 Recommended Arb Burst: 6
00:14:03.147 IEEE OUI Identifier: 8d 6b 50
00:14:03.147 Multi-path I/O
00:14:03.147 May have multiple subsystem ports: Yes
00:14:03.147 May have multiple controllers: Yes
00:14:03.147 Associated with SR-IOV VF: No
00:14:03.147 Max Data Transfer Size: 131072
00:14:03.147 Max Number of Namespaces: 32
00:14:03.147 Max Number of I/O Queues: 127
00:14:03.147 NVMe Specification Version (VS): 1.3
00:14:03.147 NVMe Specification Version (Identify): 1.3
00:14:03.147 Maximum Queue Entries: 256
00:14:03.147 Contiguous Queues Required: Yes
00:14:03.147 Arbitration Mechanisms Supported
00:14:03.147 Weighted Round Robin: Not Supported
00:14:03.147 Vendor Specific: Not Supported
00:14:03.147 Reset Timeout: 15000 ms
00:14:03.147 Doorbell Stride: 4 bytes
00:14:03.147 NVM Subsystem Reset: Not Supported
00:14:03.147 Command Sets Supported
00:14:03.147 NVM Command Set: Supported
00:14:03.147 Boot Partition: Not Supported
00:14:03.147 Memory Page Size Minimum: 4096 bytes
00:14:03.147 Memory Page Size Maximum: 4096 bytes
00:14:03.147 Persistent Memory Region: Not Supported
00:14:03.147 Optional Asynchronous Events Supported
00:14:03.147 Namespace Attribute Notices: Supported
00:14:03.147 Firmware Activation Notices: Not Supported
00:14:03.147 ANA Change Notices: Not Supported
00:14:03.147 PLE Aggregate Log Change Notices: Not Supported
00:14:03.147 LBA Status Info Alert Notices: Not Supported
00:14:03.147 EGE Aggregate Log Change Notices: Not Supported
00:14:03.147 Normal NVM Subsystem Shutdown event: Not Supported
00:14:03.147 Zone Descriptor Change Notices: Not Supported
00:14:03.147 Discovery Log Change Notices: Not Supported
00:14:03.147 Controller Attributes
00:14:03.147 128-bit Host Identifier: Supported
00:14:03.147 Non-Operational Permissive Mode: Not Supported
00:14:03.147 NVM Sets: Not Supported
00:14:03.147 Read Recovery Levels: Not Supported
00:14:03.147 Endurance Groups: Not Supported
00:14:03.147 Predictable Latency Mode: Not Supported
00:14:03.147 Traffic Based Keep Alive: Not Supported
00:14:03.147 Namespace Granularity: Not Supported
00:14:03.147 SQ Associations: Not Supported
00:14:03.147 UUID List: Not Supported
00:14:03.147 Multi-Domain Subsystem: Not Supported
00:14:03.147 Fixed Capacity Management: Not Supported
00:14:03.147 Variable Capacity Management: Not Supported
00:14:03.147 Delete Endurance Group: Not Supported
00:14:03.147 Delete NVM Set: Not Supported
00:14:03.147 Extended LBA Formats Supported: Not Supported
00:14:03.147 Flexible Data Placement Supported: Not Supported
00:14:03.147
00:14:03.147 Controller Memory Buffer Support
00:14:03.147 ================================
00:14:03.147 Supported: No
00:14:03.147
00:14:03.147 Persistent Memory Region Support
00:14:03.147 ================================
00:14:03.147 Supported: No
00:14:03.147
00:14:03.147 Admin Command Set Attributes
00:14:03.147 ============================
00:14:03.147 Security Send/Receive: Not Supported
00:14:03.147 Format NVM: Not Supported
00:14:03.147 Firmware Activate/Download: Not Supported
00:14:03.147 Namespace Management: Not Supported
00:14:03.147 Device Self-Test: Not Supported
00:14:03.147 Directives: Not Supported
00:14:03.147 NVMe-MI: Not Supported
00:14:03.147 Virtualization Management: Not Supported
00:14:03.147 Doorbell Buffer Config: Not Supported
00:14:03.147 Get LBA Status Capability: Not Supported
00:14:03.147 Command & Feature Lockdown Capability: Not Supported
00:14:03.147 Abort Command Limit: 4
00:14:03.147 Async Event Request Limit: 4
00:14:03.147 Number of Firmware Slots: N/A
00:14:03.147 Firmware Slot 1 Read-Only: N/A
00:14:03.147 Firmware Activation Without Reset: N/A
00:14:03.147 Multiple Update Detection Support: N/A
00:14:03.147 Firmware Update Granularity: No Information Provided
00:14:03.147 Per-Namespace SMART Log: No
00:14:03.147 Asymmetric Namespace Access Log Page: Not Supported
00:14:03.147 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:14:03.147 Command Effects Log Page: Supported
00:14:03.147 Get Log Page Extended Data: Supported
00:14:03.147 Telemetry Log Pages: Not Supported
00:14:03.147 Persistent Event Log Pages: Not Supported
00:14:03.147 Supported Log Pages Log Page: May Support
00:14:03.147 Commands Supported & Effects Log Page: Not Supported
00:14:03.147 Feature Identifiers & Effects Log Page: May Support
00:14:03.147 NVMe-MI Commands & Effects Log Page: May Support
00:14:03.147 Data Area 4 for Telemetry Log: Not Supported
00:14:03.147 Error Log Page Entries Supported: 128
00:14:03.147 Keep Alive: Supported
00:14:03.147 Keep Alive Granularity: 10000 ms
00:14:03.147
00:14:03.147 NVM Command Set Attributes
00:14:03.147 ==========================
00:14:03.147 Submission Queue Entry Size
00:14:03.147 Max: 64
00:14:03.147 Min: 64
00:14:03.147 Completion Queue Entry Size
00:14:03.147 Max: 16
00:14:03.147 Min: 16
00:14:03.147 Number of Namespaces: 32
00:14:03.147 Compare Command: Supported
00:14:03.147 Write Uncorrectable Command: Not Supported
00:14:03.147 Dataset Management Command: Supported
00:14:03.147 Write Zeroes Command: Supported
00:14:03.147 Set Features Save Field: Not Supported
00:14:03.147 Reservations: Not Supported
00:14:03.147 Timestamp: Not Supported
00:14:03.147 Copy: Supported
00:14:03.147 Volatile Write Cache: Present
00:14:03.147 Atomic Write Unit (Normal): 1
00:14:03.147 Atomic Write Unit (PFail): 1
00:14:03.147 Atomic Compare & Write Unit: 1
00:14:03.147 Fused Compare & Write: Supported
00:14:03.147 Scatter-Gather List
00:14:03.147 SGL Command Set: Supported (Dword aligned)
00:14:03.147 SGL Keyed: Not Supported
00:14:03.147 SGL Bit Bucket Descriptor: Not Supported
00:14:03.147 SGL Metadata Pointer: Not Supported
00:14:03.147 Oversized SGL: Not Supported
00:14:03.147 SGL Metadata Address: Not Supported
00:14:03.147 SGL Offset: Not Supported
00:14:03.147 Transport SGL Data Block: Not Supported
00:14:03.147 Replay Protected Memory Block: Not Supported
00:14:03.147
00:14:03.147 Firmware Slot Information
00:14:03.147 =========================
00:14:03.147 Active slot: 1
00:14:03.147 Slot 1 Firmware Revision: 25.01
00:14:03.147
00:14:03.147
00:14:03.147 Commands Supported and Effects
00:14:03.147 ==============================
00:14:03.147 Admin Commands
00:14:03.147 --------------
00:14:03.147 Get Log Page (02h): Supported
00:14:03.147 Identify (06h): Supported
00:14:03.147 Abort (08h): Supported
00:14:03.147 Set Features (09h): Supported
00:14:03.147 Get Features (0Ah): Supported
00:14:03.147 Asynchronous Event Request (0Ch): Supported
00:14:03.147 Keep Alive (18h): Supported
00:14:03.147 I/O Commands
00:14:03.147 ------------
00:14:03.147 Flush (00h): Supported LBA-Change
00:14:03.147 Write (01h): Supported LBA-Change
00:14:03.147 Read (02h): Supported
00:14:03.147 Compare (05h): Supported
00:14:03.147 Write Zeroes (08h): Supported LBA-Change
00:14:03.147 Dataset Management (09h): Supported LBA-Change
00:14:03.147 Copy (19h): Supported LBA-Change
00:14:03.147
00:14:03.148 Error Log
00:14:03.148 =========
00:14:03.148
00:14:03.148 Arbitration
00:14:03.148 ===========
00:14:03.148 Arbitration Burst: 1
00:14:03.148
00:14:03.148 Power Management
00:14:03.148 ================
00:14:03.148 Number of Power States: 1
00:14:03.148 Current Power State: Power State #0
00:14:03.148 Power State #0:
00:14:03.148 Max Power: 0.00 W
00:14:03.148 Non-Operational State: Operational
00:14:03.148 Entry Latency: Not Reported
00:14:03.148 Exit Latency: Not Reported
00:14:03.148 Relative Read Throughput: 0
00:14:03.148 Relative Read Latency: 0
00:14:03.148 Relative Write Throughput: 0
00:14:03.148 Relative Write Latency: 0
00:14:03.148 Idle Power: Not Reported
00:14:03.148 Active Power: Not Reported
00:14:03.148 Non-Operational Permissive Mode: Not Supported
00:14:03.148
00:14:03.148 Health Information
00:14:03.148 ==================
00:14:03.148 Critical Warnings:
00:14:03.148 Available Spare Space: OK
00:14:03.148 Temperature: OK
00:14:03.148 Device Reliability: OK
00:14:03.148 Read Only: No
00:14:03.148 Volatile Memory Backup: OK
00:14:03.148 Current Temperature: 0 Kelvin (-273 Celsius)
00:14:03.148 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:14:03.148 Available Spare: 0%
00:14:03.148 Available Sp[2024-11-19 11:25:16.683478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:14:03.148 [2024-11-19 11:25:16.683487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:14:03.148 [2024-11-19 11:25:16.683511] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:14:03.148 [2024-11-19 11:25:16.683520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:03.148 [2024-11-19 11:25:16.683526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:03.148 [2024-11-19 11:25:16.683531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:03.148 [2024-11-19 11:25:16.683537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:03.148 [2024-11-19 11:25:16.683648] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:14:03.148 [2024-11-19 11:25:16.683658] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:14:03.148 [2024-11-19 11:25:16.684655] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:03.148 [2024-11-19 11:25:16.684704] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us
00:14:03.148 [2024-11-19 11:25:16.684713] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms
00:14:03.148 [2024-11-19 11:25:16.685657] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:14:03.148 [2024-11-19 11:25:16.685668] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds
00:14:03.148 [2024-11-19 11:25:16.685716] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:14:03.148 [2024-11-19 11:25:16.687691] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:03.148 are Threshold: 0%
00:14:03.148 Life Percentage Used: 0%
00:14:03.148 Data Units Read: 0
00:14:03.148 Data Units Written: 0
00:14:03.148 Host Read Commands: 0
00:14:03.148 Host Write Commands: 0
00:14:03.148 Controller Busy Time: 0 minutes
00:14:03.148 Power Cycles: 0
00:14:03.148 Power On Hours: 0 hours
00:14:03.148 Unsafe Shutdowns: 0
00:14:03.148 Unrecoverable Media Errors: 0
00:14:03.148 Lifetime Error Log Entries: 0
00:14:03.148 Warning Temperature Time: 0 minutes
00:14:03.148 Critical Temperature Time: 0 minutes
00:14:03.148
00:14:03.148 Number of Queues
00:14:03.148 ================
00:14:03.148 Number of I/O Submission Queues: 127
00:14:03.148 Number of I/O Completion Queues: 127
00:14:03.148
00:14:03.148 Active Namespaces
00:14:03.148 =================
00:14:03.148 Namespace ID:1
00:14:03.148 Error Recovery Timeout: Unlimited
00:14:03.148 Command Set Identifier: NVM (00h)
00:14:03.148 Deallocate: Supported
00:14:03.148 Deallocated/Unwritten Error: Not Supported
00:14:03.148 Deallocated Read Value: Unknown
00:14:03.148 Deallocate in Write Zeroes: Not Supported
00:14:03.148 Deallocated Guard Field: 0xFFFF
00:14:03.148 Flush: Supported
00:14:03.148 Reservation: Supported
00:14:03.148 Namespace Sharing Capabilities: Multiple Controllers
00:14:03.148 Size (in LBAs): 131072 (0GiB)
00:14:03.148 Capacity (in LBAs): 131072 (0GiB)
00:14:03.148 Utilization (in LBAs): 131072 (0GiB)
00:14:03.148 NGUID: 617ED97D885B4C8D8B18821AD2145184
00:14:03.148 UUID: 617ed97d-885b-4c8d-8b18-821ad2145184
00:14:03.148 Thin Provisioning: Not Supported
00:14:03.148 Per-NS Atomic Units: Yes
00:14:03.148 Atomic Boundary Size (Normal): 0
00:14:03.148 Atomic Boundary Size (PFail): 0
00:14:03.148 Atomic Boundary Offset: 0
00:14:03.148 Maximum Single Source Range Length: 65535
00:14:03.148 Maximum Copy Length: 65535
00:14:03.148 Maximum Source Range Count: 1
00:14:03.148 NGUID/EUI64 Never Reused: No
00:14:03.148 Namespace Write Protected: No
00:14:03.148 Number of LBA Formats: 1
00:14:03.148 Current LBA Format: LBA Format #00
00:14:03.148 LBA
Format #00: Data Size: 512 Metadata Size: 0
00:14:03.148
00:14:03.148 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:14:03.148 [2024-11-19 11:25:16.921762] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:08.410 Initializing NVMe Controllers
00:14:08.410 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:14:08.410 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:14:08.410 Initialization complete. Launching workers.
00:14:08.410 ========================================================
00:14:08.410 Latency(us)
00:14:08.410 Device Information : IOPS MiB/s Average min max
00:14:08.410 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39931.88 155.98 3205.28 974.72 6668.69
00:14:08.410 ========================================================
00:14:08.410 Total : 39931.88 155.98 3205.28 974.72 6668.69
00:14:08.410
00:14:08.410 [2024-11-19 11:25:21.943422] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:08.410 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:14:08.410 [2024-11-19 11:25:22.180534] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:13.669 Initializing NVMe Controllers
00:14:13.669 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:14:13.669 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:14:13.669 Initialization complete. Launching workers.
00:14:13.669 ========================================================
00:14:13.669 Latency(us)
00:14:13.669 Device Information : IOPS MiB/s Average min max
00:14:13.669 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15997.10 62.49 8006.80 4984.85 15962.43
00:14:13.669 ========================================================
00:14:13.669 Total : 15997.10 62.49 8006.80 4984.85 15962.43
00:14:13.669
00:14:13.669 [2024-11-19 11:25:27.222985] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:13.669 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:14:13.669 [2024-11-19 11:25:27.431996] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:18.929 [2024-11-19 11:25:32.551462] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:18.929 Initializing NVMe Controllers
00:14:18.929 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:14:18.929 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:14:18.929 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:14:18.929 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:14:18.929 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:14:18.929 Initialization complete. Launching workers.
00:14:18.929 Starting thread on core 2
00:14:18.929 Starting thread on core 3
00:14:18.929 Starting thread on core 1
00:14:18.929 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:14:19.186 [2024-11-19 11:25:32.851227] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:22.466 [2024-11-19 11:25:35.927135] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:22.466 Initializing NVMe Controllers
00:14:22.466 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:14:22.466 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:14:22.466 Associating SPDK bdev Controller (SPDK1 ) with lcore 0
00:14:22.466 Associating SPDK bdev Controller (SPDK1 ) with lcore 1
00:14:22.466 Associating SPDK bdev Controller (SPDK1 ) with lcore 2
00:14:22.466 Associating SPDK bdev Controller (SPDK1 ) with lcore 3
00:14:22.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:14:22.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:14:22.466 Initialization complete. Launching workers.
00:14:22.466 Starting thread on core 1 with urgent priority queue
00:14:22.466 Starting thread on core 2 with urgent priority queue
00:14:22.466 Starting thread on core 3 with urgent priority queue
00:14:22.466 Starting thread on core 0 with urgent priority queue
00:14:22.466 SPDK bdev Controller (SPDK1 ) core 0: 8000.33 IO/s 12.50 secs/100000 ios
00:14:22.466 SPDK bdev Controller (SPDK1 ) core 1: 7712.67 IO/s 12.97 secs/100000 ios
00:14:22.466 SPDK bdev Controller (SPDK1 ) core 2: 9999.67 IO/s 10.00 secs/100000 ios
00:14:22.466 SPDK bdev Controller (SPDK1 ) core 3: 8316.33 IO/s 12.02 secs/100000 ios
00:14:22.466 ========================================================
00:14:22.466
00:14:22.466 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:14:22.466 [2024-11-19 11:25:36.213251] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:22.722 Initializing NVMe Controllers
00:14:22.722 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:14:22.722 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:14:22.722 Namespace ID: 1 size: 0GB
00:14:22.722 Initialization complete.
00:14:22.722 INFO: using host memory buffer for IO
00:14:22.722 Hello world!
00:14:22.722 [2024-11-19 11:25:36.247480] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:22.722 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:14:22.979 [2024-11-19 11:25:36.529367] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:23.912 Initializing NVMe Controllers
00:14:23.912 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:14:23.912 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:14:23.912 Initialization complete. Launching workers.
00:14:23.912 submit (in ns) avg, min, max = 7435.0, 3231.3, 3999859.1
00:14:23.912 complete (in ns) avg, min, max = 20345.7, 1781.7, 3998424.3
00:14:23.912
00:14:23.912 Submit histogram
00:14:23.912 ================
00:14:23.912 Range in us Cumulative Count
00:14:23.912 3.228 - 3.242: 0.0062% ( 1)
00:14:23.912 3.242 - 3.256: 0.0185% ( 2)
00:14:23.912 3.256 - 3.270: 0.0308% ( 2)
00:14:23.912 3.270 - 3.283: 0.0800% ( 8)
00:14:23.912 3.283 - 3.297: 0.1846% ( 17)
00:14:23.912 3.297 - 3.311: 0.3016% ( 19)
00:14:23.912 3.311 - 3.325: 0.6032% ( 49)
00:14:23.912 3.325 - 3.339: 1.2740% ( 109)
00:14:23.912 3.339 - 3.353: 3.5881% ( 376)
00:14:23.912 3.353 - 3.367: 8.3026% ( 766)
00:14:23.912 3.367 - 3.381: 13.9279% ( 914)
00:14:23.912 3.381 - 3.395: 19.8117% ( 956)
00:14:23.912 3.395 - 3.409: 26.5756% ( 1099)
00:14:23.912 3.409 - 3.423: 32.2132% ( 916)
00:14:23.912 3.423 - 3.437: 37.2600% ( 820)
00:14:23.912 3.437 - 3.450: 43.1130% ( 951)
00:14:23.912 3.450 - 3.464: 47.6797% ( 742)
00:14:23.912 3.464 - 3.478: 51.8279% ( 674)
00:14:23.912 3.478 - 3.492: 56.7578% ( 801)
00:14:23.912 3.492 - 3.506: 63.5894% ( 1110)
00:14:23.912 3.506 - 3.520: 69.2885% ( 926)
00:14:23.912
3.520 - 3.534: 73.4244% ( 672) 00:14:23.912 3.534 - 3.548: 78.5327% ( 830) 00:14:23.912 3.548 - 3.562: 82.8902% ( 708) 00:14:23.912 3.562 - 3.590: 87.0753% ( 680) 00:14:23.912 3.590 - 3.617: 88.0724% ( 162) 00:14:23.912 3.617 - 3.645: 88.7802% ( 115) 00:14:23.912 3.645 - 3.673: 90.1403% ( 221) 00:14:23.912 3.673 - 3.701: 91.9990% ( 302) 00:14:23.912 3.701 - 3.729: 93.6115% ( 262) 00:14:23.912 3.729 - 3.757: 95.3471% ( 282) 00:14:23.912 3.757 - 3.784: 96.7627% ( 230) 00:14:23.912 3.784 - 3.812: 98.0305% ( 206) 00:14:23.912 3.812 - 3.840: 98.6337% ( 98) 00:14:23.912 3.840 - 3.868: 99.1384% ( 82) 00:14:23.912 3.868 - 3.896: 99.3784% ( 39) 00:14:23.912 3.896 - 3.923: 99.5323% ( 25) 00:14:23.912 3.923 - 3.951: 99.5569% ( 4) 00:14:23.912 3.951 - 3.979: 99.5815% ( 4) 00:14:23.912 3.979 - 4.007: 99.5938% ( 2) 00:14:23.912 4.035 - 4.063: 99.6000% ( 1) 00:14:23.912 5.037 - 5.064: 99.6061% ( 1) 00:14:23.912 5.176 - 5.203: 99.6123% ( 1) 00:14:23.912 5.370 - 5.398: 99.6184% ( 1) 00:14:23.912 5.398 - 5.426: 99.6246% ( 1) 00:14:23.912 5.537 - 5.565: 99.6307% ( 1) 00:14:23.912 5.565 - 5.593: 99.6369% ( 1) 00:14:23.912 5.593 - 5.621: 99.6430% ( 1) 00:14:23.912 5.677 - 5.704: 99.6492% ( 1) 00:14:23.912 5.732 - 5.760: 99.6553% ( 1) 00:14:23.912 5.760 - 5.788: 99.6615% ( 1) 00:14:23.912 5.788 - 5.816: 99.6677% ( 1) 00:14:23.912 5.843 - 5.871: 99.6738% ( 1) 00:14:23.912 6.066 - 6.094: 99.6861% ( 2) 00:14:23.912 6.094 - 6.122: 99.6923% ( 1) 00:14:23.912 6.150 - 6.177: 99.6984% ( 1) 00:14:23.912 6.261 - 6.289: 99.7046% ( 1) 00:14:23.912 6.344 - 6.372: 99.7107% ( 1) 00:14:23.912 6.400 - 6.428: 99.7169% ( 1) 00:14:23.912 6.428 - 6.456: 99.7230% ( 1) 00:14:23.912 6.483 - 6.511: 99.7292% ( 1) 00:14:23.912 6.511 - 6.539: 99.7477% ( 3) 00:14:23.912 6.567 - 6.595: 99.7538% ( 1) 00:14:23.913 6.706 - 6.734: 99.7600% ( 1) 00:14:23.913 6.901 - 6.929: 99.7661% ( 1) 00:14:23.913 6.929 - 6.957: 99.7723% ( 1) 00:14:23.913 6.984 - 7.012: 99.7784% ( 1) 00:14:23.913 7.012 - 7.040: 99.7846% ( 1) 
00:14:23.913 7.040 - 7.068: 99.7907% ( 1) 00:14:23.913 7.123 - 7.179: 99.8092% ( 3) 00:14:23.913 7.179 - 7.235: 99.8154% ( 1) 00:14:23.913 7.235 - 7.290: 99.8215% ( 1) 00:14:23.913 7.624 - 7.680: 99.8277% ( 1) 00:14:23.913 7.680 - 7.736: 99.8461% ( 3) 00:14:23.913 7.847 - 7.903: 99.8584% ( 2) 00:14:23.913 8.014 - 8.070: 99.8646% ( 1) 00:14:23.913 8.403 - 8.459: 99.8708% ( 1) 00:14:23.913 8.515 - 8.570: 99.8831% ( 2) 00:14:23.913 8.849 - 8.904: 99.8892% ( 1) 00:14:23.913 [2024-11-19 11:25:37.553301] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:23.913 19.256 - 19.367: 99.8954% ( 1) 00:14:23.913 40.960 - 41.183: 99.9015% ( 1) 00:14:23.913 3989.148 - 4017.642: 100.0000% ( 16) 00:14:23.913 00:14:23.913 Complete histogram 00:14:23.913 ================== 00:14:23.913 Range in us Cumulative Count 00:14:23.913 1.781 - 1.795: 0.0246% ( 4) 00:14:23.913 1.795 - 1.809: 0.0492% ( 4) 00:14:23.913 1.809 - 1.823: 0.5908% ( 88) 00:14:23.913 1.823 - 1.837: 1.6617% ( 174) 00:14:23.913 1.837 - 1.850: 3.0527% ( 226) 00:14:23.913 1.850 - 1.864: 6.7824% ( 606) 00:14:23.913 1.864 - 1.878: 51.9387% ( 7337) 00:14:23.913 1.878 - 1.892: 84.4842% ( 5288) 00:14:23.913 1.892 - 1.906: 91.9867% ( 1219) 00:14:23.913 1.906 - 1.920: 94.2701% ( 371) 00:14:23.913 1.920 - 1.934: 94.9225% ( 106) 00:14:23.913 1.934 - 1.948: 96.4057% ( 241) 00:14:23.913 1.948 - 1.962: 98.1905% ( 290) 00:14:23.913 1.962 - 1.976: 99.0399% ( 138) 00:14:23.913 1.976 - 1.990: 99.2184% ( 29) 00:14:23.913 1.990 - 2.003: 99.2307% ( 2) 00:14:23.913 2.003 - 2.017: 99.2491% ( 3) 00:14:23.913 2.017 - 2.031: 99.2614% ( 2) 00:14:23.913 2.031 - 2.045: 99.2922% ( 5) 00:14:23.913 2.045 - 2.059: 99.3045% ( 2) 00:14:23.913 2.059 - 2.073: 99.3168% ( 2) 00:14:23.913 2.157 - 2.170: 99.3291% ( 2) 00:14:23.913 2.184 - 2.198: 99.3353% ( 1) 00:14:23.913 2.226 - 2.240: 99.3415% ( 1) 00:14:23.913 2.268 - 2.282: 99.3476% ( 1) 00:14:23.913 2.421 - 2.435: 99.3538% ( 1) 00:14:23.913 3.562 - 
3.590: 99.3599% ( 1) 00:14:23.913 3.729 - 3.757: 99.3661% ( 1) 00:14:23.913 3.896 - 3.923: 99.3722% ( 1) 00:14:23.913 3.979 - 4.007: 99.3784% ( 1) 00:14:23.913 4.007 - 4.035: 99.3845% ( 1) 00:14:23.913 4.118 - 4.146: 99.3907% ( 1) 00:14:23.913 4.174 - 4.202: 99.3968% ( 1) 00:14:23.913 4.202 - 4.230: 99.4030% ( 1) 00:14:23.913 4.397 - 4.424: 99.4092% ( 1) 00:14:23.913 4.424 - 4.452: 99.4215% ( 2) 00:14:23.913 4.480 - 4.508: 99.4276% ( 1) 00:14:23.913 4.786 - 4.814: 99.4338% ( 1) 00:14:23.913 5.009 - 5.037: 99.4399% ( 1) 00:14:23.913 5.037 - 5.064: 99.4461% ( 1) 00:14:23.913 5.231 - 5.259: 99.4522% ( 1) 00:14:23.913 5.259 - 5.287: 99.4584% ( 1) 00:14:23.913 5.370 - 5.398: 99.4707% ( 2) 00:14:23.913 5.704 - 5.732: 99.4769% ( 1) 00:14:23.913 6.010 - 6.038: 99.4830% ( 1) 00:14:23.913 6.233 - 6.261: 99.4892% ( 1) 00:14:23.913 6.344 - 6.372: 99.4953% ( 1) 00:14:23.913 6.595 - 6.623: 99.5015% ( 1) 00:14:23.913 7.123 - 7.179: 99.5076% ( 1) 00:14:23.913 7.290 - 7.346: 99.5138% ( 1) 00:14:23.913 8.237 - 8.292: 99.5199% ( 1) 00:14:23.913 8.403 - 8.459: 99.5261% ( 1) 00:14:23.913 146.922 - 147.812: 99.5323% ( 1) 00:14:23.913 203.910 - 204.800: 99.5384% ( 1) 00:14:23.913 3989.148 - 4017.642: 100.0000% ( 75) 00:14:23.913 00:14:23.913 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:23.913 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:23.913 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:23.913 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:23.913 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:14:24.171 [ 00:14:24.171 { 00:14:24.171 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:24.171 "subtype": "Discovery", 00:14:24.171 "listen_addresses": [], 00:14:24.171 "allow_any_host": true, 00:14:24.171 "hosts": [] 00:14:24.171 }, 00:14:24.171 { 00:14:24.171 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:24.171 "subtype": "NVMe", 00:14:24.171 "listen_addresses": [ 00:14:24.171 { 00:14:24.171 "trtype": "VFIOUSER", 00:14:24.171 "adrfam": "IPv4", 00:14:24.171 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:24.171 "trsvcid": "0" 00:14:24.171 } 00:14:24.171 ], 00:14:24.171 "allow_any_host": true, 00:14:24.171 "hosts": [], 00:14:24.171 "serial_number": "SPDK1", 00:14:24.171 "model_number": "SPDK bdev Controller", 00:14:24.171 "max_namespaces": 32, 00:14:24.171 "min_cntlid": 1, 00:14:24.171 "max_cntlid": 65519, 00:14:24.171 "namespaces": [ 00:14:24.171 { 00:14:24.171 "nsid": 1, 00:14:24.171 "bdev_name": "Malloc1", 00:14:24.171 "name": "Malloc1", 00:14:24.171 "nguid": "617ED97D885B4C8D8B18821AD2145184", 00:14:24.171 "uuid": "617ed97d-885b-4c8d-8b18-821ad2145184" 00:14:24.171 } 00:14:24.171 ] 00:14:24.171 }, 00:14:24.171 { 00:14:24.171 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:24.171 "subtype": "NVMe", 00:14:24.171 "listen_addresses": [ 00:14:24.171 { 00:14:24.171 "trtype": "VFIOUSER", 00:14:24.171 "adrfam": "IPv4", 00:14:24.171 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:24.171 "trsvcid": "0" 00:14:24.171 } 00:14:24.171 ], 00:14:24.171 "allow_any_host": true, 00:14:24.171 "hosts": [], 00:14:24.171 "serial_number": "SPDK2", 00:14:24.171 "model_number": "SPDK bdev Controller", 00:14:24.171 "max_namespaces": 32, 00:14:24.171 "min_cntlid": 1, 00:14:24.171 "max_cntlid": 65519, 00:14:24.171 "namespaces": [ 00:14:24.171 { 00:14:24.171 "nsid": 1, 00:14:24.171 "bdev_name": "Malloc2", 00:14:24.171 "name": "Malloc2", 00:14:24.171 "nguid": "5D128A48775347D19E504A841A03182B", 00:14:24.171 "uuid": "5d128a48-7753-47d1-9e50-4a841a03182b" 
00:14:24.171 } 00:14:24.171 ] 00:14:24.171 } 00:14:24.171 ] 00:14:24.171 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:24.171 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2227926 00:14:24.171 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:24.171 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:24.171 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:24.171 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:24.171 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:24.171 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:24.171 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:24.171 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:24.429 [2024-11-19 11:25:37.956594] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.429 Malloc3 00:14:24.429 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:24.429 [2024-11-19 11:25:38.182331] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:24.429 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:24.687 Asynchronous Event Request test 00:14:24.687 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.687 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.687 Registering asynchronous event callbacks... 00:14:24.687 Starting namespace attribute notice tests for all controllers... 00:14:24.687 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:24.687 aer_cb - Changed Namespace 00:14:24.687 Cleaning up... 
00:14:24.687 [ 00:14:24.687 { 00:14:24.687 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:24.687 "subtype": "Discovery", 00:14:24.687 "listen_addresses": [], 00:14:24.687 "allow_any_host": true, 00:14:24.687 "hosts": [] 00:14:24.687 }, 00:14:24.687 { 00:14:24.687 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:24.687 "subtype": "NVMe", 00:14:24.687 "listen_addresses": [ 00:14:24.687 { 00:14:24.687 "trtype": "VFIOUSER", 00:14:24.687 "adrfam": "IPv4", 00:14:24.688 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:24.688 "trsvcid": "0" 00:14:24.688 } 00:14:24.688 ], 00:14:24.688 "allow_any_host": true, 00:14:24.688 "hosts": [], 00:14:24.688 "serial_number": "SPDK1", 00:14:24.688 "model_number": "SPDK bdev Controller", 00:14:24.688 "max_namespaces": 32, 00:14:24.688 "min_cntlid": 1, 00:14:24.688 "max_cntlid": 65519, 00:14:24.688 "namespaces": [ 00:14:24.688 { 00:14:24.688 "nsid": 1, 00:14:24.688 "bdev_name": "Malloc1", 00:14:24.688 "name": "Malloc1", 00:14:24.688 "nguid": "617ED97D885B4C8D8B18821AD2145184", 00:14:24.688 "uuid": "617ed97d-885b-4c8d-8b18-821ad2145184" 00:14:24.688 }, 00:14:24.688 { 00:14:24.688 "nsid": 2, 00:14:24.688 "bdev_name": "Malloc3", 00:14:24.688 "name": "Malloc3", 00:14:24.688 "nguid": "6D4107FF097740B08BD22DD3E0C549E8", 00:14:24.688 "uuid": "6d4107ff-0977-40b0-8bd2-2dd3e0c549e8" 00:14:24.688 } 00:14:24.688 ] 00:14:24.688 }, 00:14:24.688 { 00:14:24.688 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:24.688 "subtype": "NVMe", 00:14:24.688 "listen_addresses": [ 00:14:24.688 { 00:14:24.688 "trtype": "VFIOUSER", 00:14:24.688 "adrfam": "IPv4", 00:14:24.688 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:24.688 "trsvcid": "0" 00:14:24.688 } 00:14:24.688 ], 00:14:24.688 "allow_any_host": true, 00:14:24.688 "hosts": [], 00:14:24.688 "serial_number": "SPDK2", 00:14:24.688 "model_number": "SPDK bdev Controller", 00:14:24.688 "max_namespaces": 32, 00:14:24.688 "min_cntlid": 1, 00:14:24.688 "max_cntlid": 65519, 00:14:24.688 "namespaces": [ 
00:14:24.688 { 00:14:24.688 "nsid": 1, 00:14:24.688 "bdev_name": "Malloc2", 00:14:24.688 "name": "Malloc2", 00:14:24.688 "nguid": "5D128A48775347D19E504A841A03182B", 00:14:24.688 "uuid": "5d128a48-7753-47d1-9e50-4a841a03182b" 00:14:24.688 } 00:14:24.688 ] 00:14:24.688 } 00:14:24.688 ] 00:14:24.688 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2227926 00:14:24.688 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:24.688 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:24.688 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:24.688 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:24.688 [2024-11-19 11:25:38.418577] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:14:24.688 [2024-11-19 11:25:38.418625] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227942 ] 00:14:24.688 [2024-11-19 11:25:38.457736] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:24.688 [2024-11-19 11:25:38.461976] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:24.688 [2024-11-19 11:25:38.462000] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f89eb3cd000 00:14:24.688 [2024-11-19 11:25:38.462976] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:24.688 [2024-11-19 11:25:38.463979] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:24.688 [2024-11-19 11:25:38.464992] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:24.688 [2024-11-19 11:25:38.465996] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:24.948 [2024-11-19 11:25:38.467007] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:24.948 [2024-11-19 11:25:38.468013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:24.948 [2024-11-19 11:25:38.469016] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:24.948 
[2024-11-19 11:25:38.470025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:24.948 [2024-11-19 11:25:38.471032] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:24.948 [2024-11-19 11:25:38.471042] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f89eb3c2000 00:14:24.948 [2024-11-19 11:25:38.471983] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:24.948 [2024-11-19 11:25:38.486394] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:24.948 [2024-11-19 11:25:38.486418] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:24.948 [2024-11-19 11:25:38.488463] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:24.948 [2024-11-19 11:25:38.488502] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:24.948 [2024-11-19 11:25:38.488571] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:24.948 [2024-11-19 11:25:38.488584] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:24.948 [2024-11-19 11:25:38.488589] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:24.948 [2024-11-19 11:25:38.489471] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:24.948 [2024-11-19 11:25:38.489480] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:24.948 [2024-11-19 11:25:38.489487] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:24.948 [2024-11-19 11:25:38.490480] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:24.949 [2024-11-19 11:25:38.490492] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:24.949 [2024-11-19 11:25:38.490499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:24.949 [2024-11-19 11:25:38.491490] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:24.949 [2024-11-19 11:25:38.491499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:24.949 [2024-11-19 11:25:38.492495] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:24.949 [2024-11-19 11:25:38.492504] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:24.949 [2024-11-19 11:25:38.492509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:24.949 [2024-11-19 11:25:38.492515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:24.949 [2024-11-19 11:25:38.492623] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:24.949 [2024-11-19 11:25:38.492627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:24.949 [2024-11-19 11:25:38.492632] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:24.949 [2024-11-19 11:25:38.493498] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:24.949 [2024-11-19 11:25:38.494501] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:24.949 [2024-11-19 11:25:38.495508] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:24.949 [2024-11-19 11:25:38.496511] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:24.949 [2024-11-19 11:25:38.496551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:24.949 [2024-11-19 11:25:38.497529] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:24.949 [2024-11-19 11:25:38.497538] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:24.949 [2024-11-19 11:25:38.497543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.497560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:24.949 [2024-11-19 11:25:38.497567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.497579] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:24.949 [2024-11-19 11:25:38.497584] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:24.949 [2024-11-19 11:25:38.497588] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:24.949 [2024-11-19 11:25:38.497599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:24.949 [2024-11-19 11:25:38.507954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:24.949 [2024-11-19 11:25:38.507966] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:24.949 [2024-11-19 11:25:38.507971] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:24.949 [2024-11-19 11:25:38.507975] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:24.949 [2024-11-19 11:25:38.507979] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:24.949 [2024-11-19 11:25:38.507986] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:24.949 [2024-11-19 11:25:38.507991] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:24.949 [2024-11-19 11:25:38.507995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.508004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.508014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:24.949 [2024-11-19 11:25:38.515953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:24.949 [2024-11-19 11:25:38.515964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.949 [2024-11-19 11:25:38.515972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.949 [2024-11-19 11:25:38.515979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.949 [2024-11-19 11:25:38.515987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.949 [2024-11-19 11:25:38.515991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.515998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.516006] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:24.949 [2024-11-19 11:25:38.523955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:24.949 [2024-11-19 11:25:38.523964] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:24.949 [2024-11-19 11:25:38.523969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.523976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.523981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.523989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:24.949 [2024-11-19 11:25:38.531952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:24.949 [2024-11-19 11:25:38.532010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.532018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:24.949 
[2024-11-19 11:25:38.532025] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:24.949 [2024-11-19 11:25:38.532029] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:24.949 [2024-11-19 11:25:38.532033] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:24.949 [2024-11-19 11:25:38.532039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:24.949 [2024-11-19 11:25:38.539953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:24.949 [2024-11-19 11:25:38.539964] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:24.949 [2024-11-19 11:25:38.539975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.539983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.539989] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:24.949 [2024-11-19 11:25:38.539993] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:24.949 [2024-11-19 11:25:38.539996] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:24.949 [2024-11-19 11:25:38.540002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:24.949 [2024-11-19 11:25:38.547953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:24.949 [2024-11-19 11:25:38.547966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.547974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.547980] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:24.949 [2024-11-19 11:25:38.547984] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:24.949 [2024-11-19 11:25:38.547987] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:24.949 [2024-11-19 11:25:38.547993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:24.949 [2024-11-19 11:25:38.559952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:24.949 [2024-11-19 11:25:38.559961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.559968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.559975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:24.949 [2024-11-19 11:25:38.559981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:24.950 [2024-11-19 11:25:38.559988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:24.950 [2024-11-19 11:25:38.559993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:24.950 [2024-11-19 11:25:38.559997] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:24.950 [2024-11-19 11:25:38.560002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:24.950 [2024-11-19 11:25:38.560007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:24.950 [2024-11-19 11:25:38.560024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:24.950 [2024-11-19 11:25:38.567953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:24.950 [2024-11-19 11:25:38.567965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:24.950 [2024-11-19 11:25:38.575952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:24.950 [2024-11-19 11:25:38.575964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:24.950 [2024-11-19 11:25:38.583954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:24.950 [2024-11-19 
11:25:38.583967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:24.950 [2024-11-19 11:25:38.591954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:24.950 [2024-11-19 11:25:38.591969] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:24.950 [2024-11-19 11:25:38.591974] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:24.950 [2024-11-19 11:25:38.591977] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:24.950 [2024-11-19 11:25:38.591980] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:24.950 [2024-11-19 11:25:38.591984] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:24.950 [2024-11-19 11:25:38.591990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:24.950 [2024-11-19 11:25:38.591997] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:24.950 [2024-11-19 11:25:38.592001] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:24.950 [2024-11-19 11:25:38.592004] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:24.950 [2024-11-19 11:25:38.592009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:24.950 [2024-11-19 11:25:38.592016] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:24.950 [2024-11-19 11:25:38.592020] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:24.950 [2024-11-19 11:25:38.592023] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:24.950 [2024-11-19 11:25:38.592028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:24.950 [2024-11-19 11:25:38.592038] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:24.950 [2024-11-19 11:25:38.592042] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:24.950 [2024-11-19 11:25:38.592045] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:24.950 [2024-11-19 11:25:38.592050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:24.950 [2024-11-19 11:25:38.599955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:24.950 [2024-11-19 11:25:38.599969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:24.950 [2024-11-19 11:25:38.599990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:24.950 [2024-11-19 11:25:38.599997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:24.950 ===================================================== 00:14:24.950 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:24.950 ===================================================== 00:14:24.950 Controller Capabilities/Features 00:14:24.950 
================================ 00:14:24.950 Vendor ID: 4e58 00:14:24.950 Subsystem Vendor ID: 4e58 00:14:24.950 Serial Number: SPDK2 00:14:24.950 Model Number: SPDK bdev Controller 00:14:24.950 Firmware Version: 25.01 00:14:24.950 Recommended Arb Burst: 6 00:14:24.950 IEEE OUI Identifier: 8d 6b 50 00:14:24.950 Multi-path I/O 00:14:24.950 May have multiple subsystem ports: Yes 00:14:24.950 May have multiple controllers: Yes 00:14:24.950 Associated with SR-IOV VF: No 00:14:24.950 Max Data Transfer Size: 131072 00:14:24.950 Max Number of Namespaces: 32 00:14:24.950 Max Number of I/O Queues: 127 00:14:24.950 NVMe Specification Version (VS): 1.3 00:14:24.950 NVMe Specification Version (Identify): 1.3 00:14:24.950 Maximum Queue Entries: 256 00:14:24.950 Contiguous Queues Required: Yes 00:14:24.950 Arbitration Mechanisms Supported 00:14:24.950 Weighted Round Robin: Not Supported 00:14:24.950 Vendor Specific: Not Supported 00:14:24.950 Reset Timeout: 15000 ms 00:14:24.950 Doorbell Stride: 4 bytes 00:14:24.950 NVM Subsystem Reset: Not Supported 00:14:24.950 Command Sets Supported 00:14:24.950 NVM Command Set: Supported 00:14:24.950 Boot Partition: Not Supported 00:14:24.950 Memory Page Size Minimum: 4096 bytes 00:14:24.950 Memory Page Size Maximum: 4096 bytes 00:14:24.950 Persistent Memory Region: Not Supported 00:14:24.950 Optional Asynchronous Events Supported 00:14:24.950 Namespace Attribute Notices: Supported 00:14:24.950 Firmware Activation Notices: Not Supported 00:14:24.950 ANA Change Notices: Not Supported 00:14:24.950 PLE Aggregate Log Change Notices: Not Supported 00:14:24.950 LBA Status Info Alert Notices: Not Supported 00:14:24.950 EGE Aggregate Log Change Notices: Not Supported 00:14:24.950 Normal NVM Subsystem Shutdown event: Not Supported 00:14:24.950 Zone Descriptor Change Notices: Not Supported 00:14:24.950 Discovery Log Change Notices: Not Supported 00:14:24.950 Controller Attributes 00:14:24.950 128-bit Host Identifier: Supported 00:14:24.950 
Non-Operational Permissive Mode: Not Supported 00:14:24.950 NVM Sets: Not Supported 00:14:24.950 Read Recovery Levels: Not Supported 00:14:24.950 Endurance Groups: Not Supported 00:14:24.950 Predictable Latency Mode: Not Supported 00:14:24.950 Traffic Based Keep ALive: Not Supported 00:14:24.950 Namespace Granularity: Not Supported 00:14:24.950 SQ Associations: Not Supported 00:14:24.950 UUID List: Not Supported 00:14:24.950 Multi-Domain Subsystem: Not Supported 00:14:24.950 Fixed Capacity Management: Not Supported 00:14:24.950 Variable Capacity Management: Not Supported 00:14:24.950 Delete Endurance Group: Not Supported 00:14:24.950 Delete NVM Set: Not Supported 00:14:24.950 Extended LBA Formats Supported: Not Supported 00:14:24.950 Flexible Data Placement Supported: Not Supported 00:14:24.950 00:14:24.950 Controller Memory Buffer Support 00:14:24.950 ================================ 00:14:24.950 Supported: No 00:14:24.950 00:14:24.950 Persistent Memory Region Support 00:14:24.950 ================================ 00:14:24.950 Supported: No 00:14:24.950 00:14:24.950 Admin Command Set Attributes 00:14:24.950 ============================ 00:14:24.950 Security Send/Receive: Not Supported 00:14:24.950 Format NVM: Not Supported 00:14:24.950 Firmware Activate/Download: Not Supported 00:14:24.950 Namespace Management: Not Supported 00:14:24.950 Device Self-Test: Not Supported 00:14:24.950 Directives: Not Supported 00:14:24.950 NVMe-MI: Not Supported 00:14:24.950 Virtualization Management: Not Supported 00:14:24.950 Doorbell Buffer Config: Not Supported 00:14:24.950 Get LBA Status Capability: Not Supported 00:14:24.950 Command & Feature Lockdown Capability: Not Supported 00:14:24.950 Abort Command Limit: 4 00:14:24.950 Async Event Request Limit: 4 00:14:24.950 Number of Firmware Slots: N/A 00:14:24.950 Firmware Slot 1 Read-Only: N/A 00:14:24.950 Firmware Activation Without Reset: N/A 00:14:24.950 Multiple Update Detection Support: N/A 00:14:24.950 Firmware Update 
Granularity: No Information Provided 00:14:24.950 Per-Namespace SMART Log: No 00:14:24.950 Asymmetric Namespace Access Log Page: Not Supported 00:14:24.950 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:24.950 Command Effects Log Page: Supported 00:14:24.950 Get Log Page Extended Data: Supported 00:14:24.950 Telemetry Log Pages: Not Supported 00:14:24.950 Persistent Event Log Pages: Not Supported 00:14:24.950 Supported Log Pages Log Page: May Support 00:14:24.950 Commands Supported & Effects Log Page: Not Supported 00:14:24.950 Feature Identifiers & Effects Log Page:May Support 00:14:24.950 NVMe-MI Commands & Effects Log Page: May Support 00:14:24.950 Data Area 4 for Telemetry Log: Not Supported 00:14:24.950 Error Log Page Entries Supported: 128 00:14:24.950 Keep Alive: Supported 00:14:24.950 Keep Alive Granularity: 10000 ms 00:14:24.950 00:14:24.950 NVM Command Set Attributes 00:14:24.951 ========================== 00:14:24.951 Submission Queue Entry Size 00:14:24.951 Max: 64 00:14:24.951 Min: 64 00:14:24.951 Completion Queue Entry Size 00:14:24.951 Max: 16 00:14:24.951 Min: 16 00:14:24.951 Number of Namespaces: 32 00:14:24.951 Compare Command: Supported 00:14:24.951 Write Uncorrectable Command: Not Supported 00:14:24.951 Dataset Management Command: Supported 00:14:24.951 Write Zeroes Command: Supported 00:14:24.951 Set Features Save Field: Not Supported 00:14:24.951 Reservations: Not Supported 00:14:24.951 Timestamp: Not Supported 00:14:24.951 Copy: Supported 00:14:24.951 Volatile Write Cache: Present 00:14:24.951 Atomic Write Unit (Normal): 1 00:14:24.951 Atomic Write Unit (PFail): 1 00:14:24.951 Atomic Compare & Write Unit: 1 00:14:24.951 Fused Compare & Write: Supported 00:14:24.951 Scatter-Gather List 00:14:24.951 SGL Command Set: Supported (Dword aligned) 00:14:24.951 SGL Keyed: Not Supported 00:14:24.951 SGL Bit Bucket Descriptor: Not Supported 00:14:24.951 SGL Metadata Pointer: Not Supported 00:14:24.951 Oversized SGL: Not Supported 00:14:24.951 SGL 
Metadata Address: Not Supported 00:14:24.951 SGL Offset: Not Supported 00:14:24.951 Transport SGL Data Block: Not Supported 00:14:24.951 Replay Protected Memory Block: Not Supported 00:14:24.951 00:14:24.951 Firmware Slot Information 00:14:24.951 ========================= 00:14:24.951 Active slot: 1 00:14:24.951 Slot 1 Firmware Revision: 25.01 00:14:24.951 00:14:24.951 00:14:24.951 Commands Supported and Effects 00:14:24.951 ============================== 00:14:24.951 Admin Commands 00:14:24.951 -------------- 00:14:24.951 Get Log Page (02h): Supported 00:14:24.951 Identify (06h): Supported 00:14:24.951 Abort (08h): Supported 00:14:24.951 Set Features (09h): Supported 00:14:24.951 Get Features (0Ah): Supported 00:14:24.951 Asynchronous Event Request (0Ch): Supported 00:14:24.951 Keep Alive (18h): Supported 00:14:24.951 I/O Commands 00:14:24.951 ------------ 00:14:24.951 Flush (00h): Supported LBA-Change 00:14:24.951 Write (01h): Supported LBA-Change 00:14:24.951 Read (02h): Supported 00:14:24.951 Compare (05h): Supported 00:14:24.951 Write Zeroes (08h): Supported LBA-Change 00:14:24.951 Dataset Management (09h): Supported LBA-Change 00:14:24.951 Copy (19h): Supported LBA-Change 00:14:24.951 00:14:24.951 Error Log 00:14:24.951 ========= 00:14:24.951 00:14:24.951 Arbitration 00:14:24.951 =========== 00:14:24.951 Arbitration Burst: 1 00:14:24.951 00:14:24.951 Power Management 00:14:24.951 ================ 00:14:24.951 Number of Power States: 1 00:14:24.951 Current Power State: Power State #0 00:14:24.951 Power State #0: 00:14:24.951 Max Power: 0.00 W 00:14:24.951 Non-Operational State: Operational 00:14:24.951 Entry Latency: Not Reported 00:14:24.951 Exit Latency: Not Reported 00:14:24.951 Relative Read Throughput: 0 00:14:24.951 Relative Read Latency: 0 00:14:24.951 Relative Write Throughput: 0 00:14:24.951 Relative Write Latency: 0 00:14:24.951 Idle Power: Not Reported 00:14:24.951 Active Power: Not Reported 00:14:24.951 Non-Operational Permissive Mode: Not 
Supported 00:14:24.951 00:14:24.951 Health Information 00:14:24.951 ================== 00:14:24.951 Critical Warnings: 00:14:24.951 Available Spare Space: OK 00:14:24.951 Temperature: OK 00:14:24.951 Device Reliability: OK 00:14:24.951 Read Only: No 00:14:24.951 Volatile Memory Backup: OK 00:14:24.951 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:24.951 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:24.951 Available Spare: 0% 00:14:24.951 Available Sp[2024-11-19 11:25:38.600089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:24.951 [2024-11-19 11:25:38.611956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:24.951 [2024-11-19 11:25:38.611988] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:24.951 [2024-11-19 11:25:38.611999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.951 [2024-11-19 11:25:38.612005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.951 [2024-11-19 11:25:38.612011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.951 [2024-11-19 11:25:38.612016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.951 [2024-11-19 11:25:38.612069] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:24.951 [2024-11-19 11:25:38.612081] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:24.951 
[2024-11-19 11:25:38.613072] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:24.951 [2024-11-19 11:25:38.613117] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:24.951 [2024-11-19 11:25:38.613124] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:24.951 [2024-11-19 11:25:38.614071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:24.951 [2024-11-19 11:25:38.614084] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:24.951 [2024-11-19 11:25:38.614130] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:24.951 [2024-11-19 11:25:38.615113] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:24.951 are Threshold: 0% 00:14:24.951 Life Percentage Used: 0% 00:14:24.951 Data Units Read: 0 00:14:24.951 Data Units Written: 0 00:14:24.951 Host Read Commands: 0 00:14:24.951 Host Write Commands: 0 00:14:24.951 Controller Busy Time: 0 minutes 00:14:24.951 Power Cycles: 0 00:14:24.951 Power On Hours: 0 hours 00:14:24.951 Unsafe Shutdowns: 0 00:14:24.951 Unrecoverable Media Errors: 0 00:14:24.951 Lifetime Error Log Entries: 0 00:14:24.951 Warning Temperature Time: 0 minutes 00:14:24.951 Critical Temperature Time: 0 minutes 00:14:24.951 00:14:24.951 Number of Queues 00:14:24.951 ================ 00:14:24.951 Number of I/O Submission Queues: 127 00:14:24.951 Number of I/O Completion Queues: 127 00:14:24.951 00:14:24.951 Active Namespaces 00:14:24.951 ================= 00:14:24.951 Namespace ID:1 00:14:24.951 Error Recovery Timeout: Unlimited 
00:14:24.951 Command Set Identifier: NVM (00h) 00:14:24.951 Deallocate: Supported 00:14:24.951 Deallocated/Unwritten Error: Not Supported 00:14:24.951 Deallocated Read Value: Unknown 00:14:24.951 Deallocate in Write Zeroes: Not Supported 00:14:24.951 Deallocated Guard Field: 0xFFFF 00:14:24.951 Flush: Supported 00:14:24.951 Reservation: Supported 00:14:24.951 Namespace Sharing Capabilities: Multiple Controllers 00:14:24.951 Size (in LBAs): 131072 (0GiB) 00:14:24.951 Capacity (in LBAs): 131072 (0GiB) 00:14:24.951 Utilization (in LBAs): 131072 (0GiB) 00:14:24.951 NGUID: 5D128A48775347D19E504A841A03182B 00:14:24.951 UUID: 5d128a48-7753-47d1-9e50-4a841a03182b 00:14:24.951 Thin Provisioning: Not Supported 00:14:24.951 Per-NS Atomic Units: Yes 00:14:24.951 Atomic Boundary Size (Normal): 0 00:14:24.951 Atomic Boundary Size (PFail): 0 00:14:24.951 Atomic Boundary Offset: 0 00:14:24.951 Maximum Single Source Range Length: 65535 00:14:24.951 Maximum Copy Length: 65535 00:14:24.951 Maximum Source Range Count: 1 00:14:24.951 NGUID/EUI64 Never Reused: No 00:14:24.951 Namespace Write Protected: No 00:14:24.951 Number of LBA Formats: 1 00:14:24.951 Current LBA Format: LBA Format #00 00:14:24.951 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:24.951 00:14:24.951 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:25.210 [2024-11-19 11:25:38.840392] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:30.475 Initializing NVMe Controllers 00:14:30.475 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:30.475 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:30.475 Initialization complete. Launching workers. 00:14:30.475 ======================================================== 00:14:30.475 Latency(us) 00:14:30.475 Device Information : IOPS MiB/s Average min max 00:14:30.475 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39947.58 156.05 3203.79 971.79 8116.36 00:14:30.475 ======================================================== 00:14:30.475 Total : 39947.58 156.05 3203.79 971.79 8116.36 00:14:30.475 00:14:30.475 [2024-11-19 11:25:43.944215] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:30.475 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:30.475 [2024-11-19 11:25:44.173882] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:35.739 Initializing NVMe Controllers 00:14:35.739 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:35.739 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:35.739 Initialization complete. Launching workers. 
00:14:35.739 ======================================================== 00:14:35.739 Latency(us) 00:14:35.739 Device Information : IOPS MiB/s Average min max 00:14:35.739 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39810.77 155.51 3214.79 967.69 10577.79 00:14:35.739 ======================================================== 00:14:35.739 Total : 39810.77 155.51 3214.79 967.69 10577.79 00:14:35.739 00:14:35.739 [2024-11-19 11:25:49.193576] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:35.739 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:35.739 [2024-11-19 11:25:49.407150] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:41.003 [2024-11-19 11:25:54.544041] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:41.003 Initializing NVMe Controllers 00:14:41.003 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:41.003 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:41.003 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:41.003 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:41.003 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:41.003 Initialization complete. Launching workers. 
00:14:41.003 Starting thread on core 2 00:14:41.003 Starting thread on core 3 00:14:41.003 Starting thread on core 1 00:14:41.003 11:25:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:41.262 [2024-11-19 11:25:54.849438] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:44.546 [2024-11-19 11:25:57.938483] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:44.546 Initializing NVMe Controllers 00:14:44.546 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:44.546 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:44.546 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:44.546 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:44.546 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:44.546 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:44.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:44.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:44.546 Initialization complete. Launching workers. 
00:14:44.546 Starting thread on core 1 with urgent priority queue 00:14:44.546 Starting thread on core 2 with urgent priority queue 00:14:44.546 Starting thread on core 3 with urgent priority queue 00:14:44.546 Starting thread on core 0 with urgent priority queue 00:14:44.546 SPDK bdev Controller (SPDK2 ) core 0: 5860.67 IO/s 17.06 secs/100000 ios 00:14:44.546 SPDK bdev Controller (SPDK2 ) core 1: 4938.00 IO/s 20.25 secs/100000 ios 00:14:44.546 SPDK bdev Controller (SPDK2 ) core 2: 4047.00 IO/s 24.71 secs/100000 ios 00:14:44.546 SPDK bdev Controller (SPDK2 ) core 3: 4657.33 IO/s 21.47 secs/100000 ios 00:14:44.546 ======================================================== 00:14:44.546 00:14:44.546 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:44.546 [2024-11-19 11:25:58.229384] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:44.546 Initializing NVMe Controllers 00:14:44.546 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:44.546 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:44.546 Namespace ID: 1 size: 0GB 00:14:44.546 Initialization complete. 00:14:44.546 INFO: using host memory buffer for IO 00:14:44.546 Hello world! 
00:14:44.546 [2024-11-19 11:25:58.239449] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:44.546 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:44.804 [2024-11-19 11:25:58.529892] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:46.177 Initializing NVMe Controllers 00:14:46.177 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:46.177 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:46.177 Initialization complete. Launching workers. 00:14:46.177 submit (in ns) avg, min, max = 6942.7, 3264.3, 3998526.1 00:14:46.177 complete (in ns) avg, min, max = 20702.2, 1810.4, 4000687.0 00:14:46.177 00:14:46.177 Submit histogram 00:14:46.177 ================ 00:14:46.177 Range in us Cumulative Count 00:14:46.177 3.256 - 3.270: 0.0062% ( 1) 00:14:46.177 3.270 - 3.283: 0.0620% ( 9) 00:14:46.177 3.283 - 3.297: 0.3411% ( 45) 00:14:46.177 3.297 - 3.311: 2.0216% ( 271) 00:14:46.177 3.311 - 3.325: 5.3516% ( 537) 00:14:46.177 3.325 - 3.339: 8.8863% ( 570) 00:14:46.177 3.339 - 3.353: 13.3139% ( 714) 00:14:46.177 3.353 - 3.367: 19.3476% ( 973) 00:14:46.177 3.367 - 3.381: 24.7985% ( 879) 00:14:46.177 3.381 - 3.395: 30.3919% ( 902) 00:14:46.177 3.395 - 3.409: 36.1900% ( 935) 00:14:46.177 3.409 - 3.423: 40.9463% ( 767) 00:14:46.177 3.423 - 3.437: 45.4794% ( 731) 00:14:46.177 3.437 - 3.450: 50.3597% ( 787) 00:14:46.177 3.450 - 3.464: 56.3252% ( 962) 00:14:46.177 3.464 - 3.478: 61.4163% ( 821) 00:14:46.177 3.478 - 3.492: 65.3603% ( 636) 00:14:46.177 3.492 - 3.506: 71.6297% ( 1011) 00:14:46.177 3.506 - 3.520: 76.5286% ( 790) 00:14:46.177 3.520 - 3.534: 79.8400% ( 534) 00:14:46.177 3.534 - 3.548: 82.8352% ( 483) 
00:14:46.177 3.548 - 3.562: 85.0242% ( 353) 00:14:46.177 3.562 - 3.590: 87.1264% ( 339) 00:14:46.177 3.590 - 3.617: 88.2612% ( 183) 00:14:46.177 3.617 - 3.645: 89.5138% ( 202) 00:14:46.177 3.645 - 3.673: 91.1633% ( 266) 00:14:46.177 3.673 - 3.701: 92.7756% ( 260) 00:14:46.177 3.701 - 3.729: 94.3321% ( 251) 00:14:46.177 3.729 - 3.757: 96.0995% ( 285) 00:14:46.177 3.757 - 3.784: 97.5443% ( 233) 00:14:46.177 3.784 - 3.812: 98.3815% ( 135) 00:14:46.177 3.812 - 3.840: 98.9768% ( 96) 00:14:46.177 3.840 - 3.868: 99.2869% ( 50) 00:14:46.177 3.868 - 3.896: 99.5101% ( 36) 00:14:46.177 3.896 - 3.923: 99.5783% ( 11) 00:14:46.177 3.923 - 3.951: 99.5907% ( 2) 00:14:46.177 3.951 - 3.979: 99.6031% ( 2) 00:14:46.177 4.953 - 4.981: 99.6093% ( 1) 00:14:46.177 5.092 - 5.120: 99.6155% ( 1) 00:14:46.177 5.259 - 5.287: 99.6279% ( 2) 00:14:46.177 5.398 - 5.426: 99.6403% ( 2) 00:14:46.177 5.426 - 5.454: 99.6465% ( 1) 00:14:46.177 5.454 - 5.482: 99.6527% ( 1) 00:14:46.177 5.482 - 5.510: 99.6589% ( 1) 00:14:46.177 5.593 - 5.621: 99.6651% ( 1) 00:14:46.177 5.649 - 5.677: 99.6713% ( 1) 00:14:46.177 5.732 - 5.760: 99.6837% ( 2) 00:14:46.177 5.843 - 5.871: 99.6899% ( 1) 00:14:46.177 5.871 - 5.899: 99.7023% ( 2) 00:14:46.177 5.899 - 5.927: 99.7085% ( 1) 00:14:46.177 5.927 - 5.955: 99.7271% ( 3) 00:14:46.177 5.955 - 5.983: 99.7333% ( 1) 00:14:46.177 5.983 - 6.010: 99.7396% ( 1) 00:14:46.177 6.066 - 6.094: 99.7458% ( 1) 00:14:46.177 6.094 - 6.122: 99.7520% ( 1) 00:14:46.177 6.150 - 6.177: 99.7768% ( 4) 00:14:46.177 6.177 - 6.205: 99.7830% ( 1) 00:14:46.177 6.233 - 6.261: 99.7892% ( 1) 00:14:46.177 6.261 - 6.289: 99.7954% ( 1) 00:14:46.177 6.289 - 6.317: 99.8016% ( 1) 00:14:46.177 6.344 - 6.372: 99.8078% ( 1) 00:14:46.177 6.372 - 6.400: 99.8140% ( 1) 00:14:46.177 6.400 - 6.428: 99.8202% ( 1) 00:14:46.177 6.483 - 6.511: 99.8264% ( 1) 00:14:46.177 6.595 - 6.623: 99.8326% ( 1) 00:14:46.177 6.650 - 6.678: 99.8388% ( 1) 00:14:46.177 6.678 - 6.706: 99.8450% ( 1) 00:14:46.177 6.817 - 6.845: 99.8574% ( 2) 
00:14:46.177 6.929 - 6.957: 99.8636% ( 1) 00:14:46.177 6.957 - 6.984: 99.8760% ( 2) 00:14:46.177 7.040 - 7.068: 99.8822% ( 1) 00:14:46.177 7.290 - 7.346: 99.8884% ( 1) 00:14:46.177 7.402 - 7.457: 99.9008% ( 2) 00:14:46.177 7.513 - 7.569: 99.9070% ( 1) 00:14:46.177 9.238 - 9.294: 99.9132% ( 1) 00:14:46.177 3989.148 - 4017.642: 100.0000% ( 14) 00:14:46.177 00:14:46.177 Complete histogram 00:14:46.177 ================== 00:14:46.177 Range in us Cumulative Count 00:14:46.177 1.809 - 1.823: 0.0992% ( 16) 00:14:46.177 1.823 - [2024-11-19 11:25:59.624008] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:46.177 1.837: 1.1100% ( 163) 00:14:46.177 1.837 - 1.850: 2.4867% ( 222) 00:14:46.177 1.850 - 1.864: 18.1322% ( 2523) 00:14:46.177 1.864 - 1.878: 74.6992% ( 9122) 00:14:46.177 1.878 - 1.892: 91.1261% ( 2649) 00:14:46.177 1.892 - 1.906: 95.2437% ( 664) 00:14:46.177 1.906 - 1.920: 96.3103% ( 172) 00:14:46.177 1.920 - 1.934: 97.1351% ( 133) 00:14:46.177 1.934 - 1.948: 98.1769% ( 168) 00:14:46.178 1.948 - 1.962: 98.9334% ( 122) 00:14:46.178 1.962 - 1.976: 99.1690% ( 38) 00:14:46.178 1.976 - 1.990: 99.2311% ( 10) 00:14:46.178 1.990 - 2.003: 99.2745% ( 7) 00:14:46.178 2.003 - 2.017: 99.2993% ( 4) 00:14:46.178 2.017 - 2.031: 99.3241% ( 4) 00:14:46.178 2.031 - 2.045: 99.3427% ( 3) 00:14:46.178 2.045 - 2.059: 99.3551% ( 2) 00:14:46.178 2.129 - 2.143: 99.3613% ( 1) 00:14:46.178 2.240 - 2.254: 99.3675% ( 1) 00:14:46.178 2.268 - 2.282: 99.3737% ( 1) 00:14:46.178 2.323 - 2.337: 99.3799% ( 1) 00:14:46.178 3.548 - 3.562: 99.3861% ( 1) 00:14:46.178 3.729 - 3.757: 99.3923% ( 1) 00:14:46.178 3.812 - 3.840: 99.4047% ( 2) 00:14:46.178 4.090 - 4.118: 99.4171% ( 2) 00:14:46.178 4.118 - 4.146: 99.4233% ( 1) 00:14:46.178 4.174 - 4.202: 99.4295% ( 1) 00:14:46.178 4.230 - 4.257: 99.4357% ( 1) 00:14:46.178 4.257 - 4.285: 99.4419% ( 1) 00:14:46.178 4.341 - 4.369: 99.4543% ( 2) 00:14:46.178 4.397 - 4.424: 99.4605% ( 1) 00:14:46.178 4.563 - 
4.591: 99.4667% ( 1) 00:14:46.178 4.619 - 4.647: 99.4791% ( 2) 00:14:46.178 4.675 - 4.703: 99.4853% ( 1) 00:14:46.178 4.758 - 4.786: 99.4915% ( 1) 00:14:46.178 4.870 - 4.897: 99.4977% ( 1) 00:14:46.178 4.925 - 4.953: 99.5039% ( 1) 00:14:46.178 5.009 - 5.037: 99.5101% ( 1) 00:14:46.178 5.426 - 5.454: 99.5163% ( 1) 00:14:46.178 5.871 - 5.899: 99.5225% ( 1) 00:14:46.178 6.483 - 6.511: 99.5287% ( 1) 00:14:46.178 3989.148 - 4017.642: 100.0000% ( 76) 00:14:46.178 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:46.178 [ 00:14:46.178 { 00:14:46.178 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:46.178 "subtype": "Discovery", 00:14:46.178 "listen_addresses": [], 00:14:46.178 "allow_any_host": true, 00:14:46.178 "hosts": [] 00:14:46.178 }, 00:14:46.178 { 00:14:46.178 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:46.178 "subtype": "NVMe", 00:14:46.178 "listen_addresses": [ 00:14:46.178 { 00:14:46.178 "trtype": "VFIOUSER", 00:14:46.178 "adrfam": "IPv4", 00:14:46.178 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:46.178 "trsvcid": "0" 00:14:46.178 } 00:14:46.178 ], 00:14:46.178 "allow_any_host": true, 00:14:46.178 "hosts": [], 00:14:46.178 "serial_number": "SPDK1", 00:14:46.178 "model_number": "SPDK bdev Controller", 00:14:46.178 
"max_namespaces": 32, 00:14:46.178 "min_cntlid": 1, 00:14:46.178 "max_cntlid": 65519, 00:14:46.178 "namespaces": [ 00:14:46.178 { 00:14:46.178 "nsid": 1, 00:14:46.178 "bdev_name": "Malloc1", 00:14:46.178 "name": "Malloc1", 00:14:46.178 "nguid": "617ED97D885B4C8D8B18821AD2145184", 00:14:46.178 "uuid": "617ed97d-885b-4c8d-8b18-821ad2145184" 00:14:46.178 }, 00:14:46.178 { 00:14:46.178 "nsid": 2, 00:14:46.178 "bdev_name": "Malloc3", 00:14:46.178 "name": "Malloc3", 00:14:46.178 "nguid": "6D4107FF097740B08BD22DD3E0C549E8", 00:14:46.178 "uuid": "6d4107ff-0977-40b0-8bd2-2dd3e0c549e8" 00:14:46.178 } 00:14:46.178 ] 00:14:46.178 }, 00:14:46.178 { 00:14:46.178 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:46.178 "subtype": "NVMe", 00:14:46.178 "listen_addresses": [ 00:14:46.178 { 00:14:46.178 "trtype": "VFIOUSER", 00:14:46.178 "adrfam": "IPv4", 00:14:46.178 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:46.178 "trsvcid": "0" 00:14:46.178 } 00:14:46.178 ], 00:14:46.178 "allow_any_host": true, 00:14:46.178 "hosts": [], 00:14:46.178 "serial_number": "SPDK2", 00:14:46.178 "model_number": "SPDK bdev Controller", 00:14:46.178 "max_namespaces": 32, 00:14:46.178 "min_cntlid": 1, 00:14:46.178 "max_cntlid": 65519, 00:14:46.178 "namespaces": [ 00:14:46.178 { 00:14:46.178 "nsid": 1, 00:14:46.178 "bdev_name": "Malloc2", 00:14:46.178 "name": "Malloc2", 00:14:46.178 "nguid": "5D128A48775347D19E504A841A03182B", 00:14:46.178 "uuid": "5d128a48-7753-47d1-9e50-4a841a03182b" 00:14:46.178 } 00:14:46.178 ] 00:14:46.178 } 00:14:46.178 ] 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2231574 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:46.178 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:46.438 [2024-11-19 11:26:00.033682] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:46.438 Malloc4 00:14:46.438 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:46.696 [2024-11-19 11:26:00.282527] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:46.696 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:46.696 Asynchronous Event Request test 00:14:46.696 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:46.696 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:46.696 Registering asynchronous 
event callbacks... 00:14:46.696 Starting namespace attribute notice tests for all controllers... 00:14:46.696 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:46.696 aer_cb - Changed Namespace 00:14:46.696 Cleaning up... 00:14:46.955 [ 00:14:46.955 { 00:14:46.955 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:46.955 "subtype": "Discovery", 00:14:46.955 "listen_addresses": [], 00:14:46.955 "allow_any_host": true, 00:14:46.955 "hosts": [] 00:14:46.955 }, 00:14:46.955 { 00:14:46.955 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:46.955 "subtype": "NVMe", 00:14:46.955 "listen_addresses": [ 00:14:46.955 { 00:14:46.955 "trtype": "VFIOUSER", 00:14:46.955 "adrfam": "IPv4", 00:14:46.955 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:46.955 "trsvcid": "0" 00:14:46.955 } 00:14:46.955 ], 00:14:46.955 "allow_any_host": true, 00:14:46.955 "hosts": [], 00:14:46.955 "serial_number": "SPDK1", 00:14:46.955 "model_number": "SPDK bdev Controller", 00:14:46.955 "max_namespaces": 32, 00:14:46.955 "min_cntlid": 1, 00:14:46.955 "max_cntlid": 65519, 00:14:46.955 "namespaces": [ 00:14:46.955 { 00:14:46.955 "nsid": 1, 00:14:46.955 "bdev_name": "Malloc1", 00:14:46.955 "name": "Malloc1", 00:14:46.955 "nguid": "617ED97D885B4C8D8B18821AD2145184", 00:14:46.955 "uuid": "617ed97d-885b-4c8d-8b18-821ad2145184" 00:14:46.955 }, 00:14:46.955 { 00:14:46.955 "nsid": 2, 00:14:46.955 "bdev_name": "Malloc3", 00:14:46.955 "name": "Malloc3", 00:14:46.955 "nguid": "6D4107FF097740B08BD22DD3E0C549E8", 00:14:46.955 "uuid": "6d4107ff-0977-40b0-8bd2-2dd3e0c549e8" 00:14:46.955 } 00:14:46.955 ] 00:14:46.955 }, 00:14:46.955 { 00:14:46.955 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:46.955 "subtype": "NVMe", 00:14:46.955 "listen_addresses": [ 00:14:46.955 { 00:14:46.955 "trtype": "VFIOUSER", 00:14:46.955 "adrfam": "IPv4", 00:14:46.955 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:46.955 "trsvcid": "0" 00:14:46.955 } 00:14:46.955 ], 
00:14:46.955 "allow_any_host": true, 00:14:46.955 "hosts": [], 00:14:46.955 "serial_number": "SPDK2", 00:14:46.955 "model_number": "SPDK bdev Controller", 00:14:46.955 "max_namespaces": 32, 00:14:46.955 "min_cntlid": 1, 00:14:46.955 "max_cntlid": 65519, 00:14:46.955 "namespaces": [ 00:14:46.955 { 00:14:46.955 "nsid": 1, 00:14:46.955 "bdev_name": "Malloc2", 00:14:46.955 "name": "Malloc2", 00:14:46.955 "nguid": "5D128A48775347D19E504A841A03182B", 00:14:46.955 "uuid": "5d128a48-7753-47d1-9e50-4a841a03182b" 00:14:46.955 }, 00:14:46.955 { 00:14:46.955 "nsid": 2, 00:14:46.955 "bdev_name": "Malloc4", 00:14:46.955 "name": "Malloc4", 00:14:46.955 "nguid": "EC447E6646354983ABAE0966DE9665EB", 00:14:46.955 "uuid": "ec447e66-4635-4983-abae-0966de9665eb" 00:14:46.955 } 00:14:46.955 ] 00:14:46.955 } 00:14:46.955 ] 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2231574 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2223773 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2223773 ']' 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2223773 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2223773 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2223773' 00:14:46.955 killing process with pid 2223773 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2223773 00:14:46.955 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2223773 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2231630 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2231630' 00:14:47.215 Process pid: 2231630 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2231630 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2231630 
']' 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.215 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:47.215 [2024-11-19 11:26:00.849019] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:47.215 [2024-11-19 11:26:00.849885] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:14:47.215 [2024-11-19 11:26:00.849924] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.215 [2024-11-19 11:26:00.923452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.215 [2024-11-19 11:26:00.961414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.215 [2024-11-19 11:26:00.961452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.215 [2024-11-19 11:26:00.961459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.215 [2024-11-19 11:26:00.961464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:47.215 [2024-11-19 11:26:00.961469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.215 [2024-11-19 11:26:00.962904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.215 [2024-11-19 11:26:00.963022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.215 [2024-11-19 11:26:00.963056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.215 [2024-11-19 11:26:00.963057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.475 [2024-11-19 11:26:01.032022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:47.475 [2024-11-19 11:26:01.032224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:47.475 [2024-11-19 11:26:01.032863] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:47.475 [2024-11-19 11:26:01.033223] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:47.475 [2024-11-19 11:26:01.033264] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:47.475 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.475 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:47.475 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:48.412 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:48.671 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:48.671 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:48.671 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:48.671 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:48.671 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:48.930 Malloc1 00:14:48.930 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:48.930 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:49.191 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:49.541 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.541 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:49.541 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:49.861 Malloc2 00:14:49.861 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:49.861 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:50.119 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:50.377 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:50.378 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2231630 00:14:50.378 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2231630 ']' 00:14:50.378 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2231630 00:14:50.378 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:50.378 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.378 11:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2231630 00:14:50.378 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.378 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.378 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2231630' 00:14:50.378 killing process with pid 2231630 00:14:50.378 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2231630 00:14:50.378 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2231630 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:50.637 00:14:50.637 real 0m50.964s 00:14:50.637 user 3m17.055s 00:14:50.637 sys 0m3.304s 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:50.637 ************************************ 00:14:50.637 END TEST nvmf_vfio_user 00:14:50.637 ************************************ 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.637 ************************************ 00:14:50.637 START TEST nvmf_vfio_user_nvme_compliance 00:14:50.637 ************************************ 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:50.637 * Looking for test storage... 00:14:50.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.637 11:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.637 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:50.897 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:50.897 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.897 11:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:50.897 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.897 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.897 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.897 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:50.897 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.897 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:50.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.897 --rc genhtml_branch_coverage=1 00:14:50.897 --rc genhtml_function_coverage=1 00:14:50.897 --rc genhtml_legend=1 00:14:50.897 --rc geninfo_all_blocks=1 00:14:50.898 --rc geninfo_unexecuted_blocks=1 00:14:50.898 00:14:50.898 ' 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:50.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.898 --rc genhtml_branch_coverage=1 00:14:50.898 --rc genhtml_function_coverage=1 00:14:50.898 --rc genhtml_legend=1 00:14:50.898 --rc geninfo_all_blocks=1 00:14:50.898 --rc geninfo_unexecuted_blocks=1 00:14:50.898 00:14:50.898 ' 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:50.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.898 --rc genhtml_branch_coverage=1 00:14:50.898 --rc genhtml_function_coverage=1 00:14:50.898 --rc 
genhtml_legend=1 00:14:50.898 --rc geninfo_all_blocks=1 00:14:50.898 --rc geninfo_unexecuted_blocks=1 00:14:50.898 00:14:50.898 ' 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:50.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.898 --rc genhtml_branch_coverage=1 00:14:50.898 --rc genhtml_function_coverage=1 00:14:50.898 --rc genhtml_legend=1 00:14:50.898 --rc geninfo_all_blocks=1 00:14:50.898 --rc geninfo_unexecuted_blocks=1 00:14:50.898 00:14:50.898 ' 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.898 11:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.898 11:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2232394 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2232394' 00:14:50.898 Process pid: 2232394 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:50.898 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:50.899 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2232394 00:14:50.899 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2232394 ']' 00:14:50.899 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.899 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.899 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.899 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.899 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:50.899 [2024-11-19 11:26:04.507431] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:14:50.899 [2024-11-19 11:26:04.507478] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.899 [2024-11-19 11:26:04.580616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:50.899 [2024-11-19 11:26:04.623080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.899 [2024-11-19 11:26:04.623116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.899 [2024-11-19 11:26:04.623123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.899 [2024-11-19 11:26:04.623130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.899 [2024-11-19 11:26:04.623138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:50.899 [2024-11-19 11:26:04.624457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.899 [2024-11-19 11:26:04.624564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.899 [2024-11-19 11:26:04.624566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.158 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.158 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:51.158 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.095 11:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:52.095 malloc0 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:52.095 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:52.354 00:14:52.354 00:14:52.354 CUnit - A unit testing framework for C - Version 2.1-3 00:14:52.354 http://cunit.sourceforge.net/ 00:14:52.354 00:14:52.354 00:14:52.354 Suite: nvme_compliance 00:14:52.354 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 11:26:05.961405] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.354 [2024-11-19 11:26:05.962755] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:52.354 [2024-11-19 11:26:05.962769] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:52.354 [2024-11-19 11:26:05.962775] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:52.354 [2024-11-19 11:26:05.964431] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.354 passed 00:14:52.354 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 11:26:06.047003] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.354 [2024-11-19 11:26:06.050023] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.354 passed 00:14:52.354 Test: admin_identify_ns ...[2024-11-19 11:26:06.126421] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.612 [2024-11-19 11:26:06.189966] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:52.613 [2024-11-19 11:26:06.197958] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:52.613 [2024-11-19 11:26:06.219061] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:52.613 passed 00:14:52.613 Test: admin_get_features_mandatory_features ...[2024-11-19 11:26:06.293046] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.613 [2024-11-19 11:26:06.296066] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.613 passed 00:14:52.613 Test: admin_get_features_optional_features ...[2024-11-19 11:26:06.374575] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.613 [2024-11-19 11:26:06.377596] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.871 passed 00:14:52.871 Test: admin_set_features_number_of_queues ...[2024-11-19 11:26:06.455424] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.871 [2024-11-19 11:26:06.564035] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.871 passed 00:14:52.871 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 11:26:06.637999] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.871 [2024-11-19 11:26:06.641019] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.131 passed 00:14:53.131 Test: admin_get_log_page_with_lpo ...[2024-11-19 11:26:06.718907] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.131 [2024-11-19 11:26:06.787964] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:53.131 [2024-11-19 11:26:06.801024] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.131 passed 00:14:53.131 Test: fabric_property_get ...[2024-11-19 11:26:06.876137] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.131 [2024-11-19 11:26:06.877392] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:53.131 [2024-11-19 11:26:06.880168] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.131 passed 00:14:53.390 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 11:26:06.956664] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.390 [2024-11-19 11:26:06.957892] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:53.390 [2024-11-19 11:26:06.960701] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.390 passed 00:14:53.390 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 11:26:07.038443] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.390 [2024-11-19 11:26:07.121957] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:53.390 [2024-11-19 11:26:07.132955] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:53.390 [2024-11-19 11:26:07.138054] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.390 passed 00:14:53.649 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 11:26:07.213187] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.649 [2024-11-19 11:26:07.214420] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:53.649 [2024-11-19 11:26:07.216202] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.649 passed 00:14:53.649 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 11:26:07.293113] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.649 [2024-11-19 11:26:07.369963] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:53.649 [2024-11-19 
11:26:07.393955] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:53.649 [2024-11-19 11:26:07.399033] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.649 passed 00:14:53.908 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 11:26:07.476955] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.908 [2024-11-19 11:26:07.478211] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:53.908 [2024-11-19 11:26:07.478236] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:53.908 [2024-11-19 11:26:07.479981] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:53.908 passed 00:14:53.908 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 11:26:07.558936] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:53.908 [2024-11-19 11:26:07.651953] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:53.908 [2024-11-19 11:26:07.659960] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:53.908 [2024-11-19 11:26:07.667953] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:53.908 [2024-11-19 11:26:07.675952] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:54.167 [2024-11-19 11:26:07.705037] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.167 passed 00:14:54.167 Test: admin_create_io_sq_verify_pc ...[2024-11-19 11:26:07.781386] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.167 [2024-11-19 11:26:07.797960] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:54.167 [2024-11-19 11:26:07.815550] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.167 passed 00:14:54.167 Test: admin_create_io_qp_max_qps ...[2024-11-19 11:26:07.893104] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.545 [2024-11-19 11:26:08.988957] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:55.804 [2024-11-19 11:26:09.361715] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:55.804 passed 00:14:55.804 Test: admin_create_io_sq_shared_cq ...[2024-11-19 11:26:09.437877] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:55.804 [2024-11-19 11:26:09.570957] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:56.063 [2024-11-19 11:26:09.608012] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:56.063 passed 00:14:56.063 00:14:56.063 Run Summary: Type Total Ran Passed Failed Inactive 00:14:56.063 suites 1 1 n/a 0 0 00:14:56.063 tests 18 18 18 0 0 00:14:56.063 asserts 360 360 360 0 n/a 00:14:56.063 00:14:56.063 Elapsed time = 1.495 seconds 00:14:56.063 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2232394 00:14:56.063 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2232394 ']' 00:14:56.063 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2232394 00:14:56.063 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:56.063 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.063 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2232394 00:14:56.063 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:56.063 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:56.063 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2232394' 00:14:56.063 killing process with pid 2232394 00:14:56.063 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2232394 00:14:56.063 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2232394 00:14:56.323 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:56.323 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:56.323 00:14:56.323 real 0m5.641s 00:14:56.323 user 0m15.768s 00:14:56.323 sys 0m0.527s 00:14:56.323 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.323 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:56.323 ************************************ 00:14:56.323 END TEST nvmf_vfio_user_nvme_compliance 00:14:56.323 ************************************ 00:14:56.323 11:26:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:56.323 11:26:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:56.323 11:26:09 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.323 11:26:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.323 ************************************ 00:14:56.323 START TEST nvmf_vfio_user_fuzz 00:14:56.323 ************************************ 00:14:56.323 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:56.323 * Looking for test storage... 00:14:56.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.323 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:56.323 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:56.323 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.583 11:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:56.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.583 --rc genhtml_branch_coverage=1 00:14:56.583 --rc genhtml_function_coverage=1 00:14:56.583 --rc genhtml_legend=1 00:14:56.583 --rc geninfo_all_blocks=1 00:14:56.583 --rc geninfo_unexecuted_blocks=1 00:14:56.583 00:14:56.583 ' 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:56.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.583 --rc genhtml_branch_coverage=1 00:14:56.583 --rc genhtml_function_coverage=1 00:14:56.583 --rc genhtml_legend=1 00:14:56.583 --rc geninfo_all_blocks=1 00:14:56.583 --rc geninfo_unexecuted_blocks=1 00:14:56.583 00:14:56.583 ' 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:56.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.583 --rc genhtml_branch_coverage=1 00:14:56.583 --rc genhtml_function_coverage=1 00:14:56.583 --rc genhtml_legend=1 00:14:56.583 --rc geninfo_all_blocks=1 00:14:56.583 --rc geninfo_unexecuted_blocks=1 00:14:56.583 00:14:56.583 ' 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:56.583 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:56.583 --rc genhtml_branch_coverage=1 00:14:56.583 --rc genhtml_function_coverage=1 00:14:56.583 --rc genhtml_legend=1 00:14:56.583 --rc geninfo_all_blocks=1 00:14:56.583 --rc geninfo_unexecuted_blocks=1 00:14:56.583 00:14:56.583 ' 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.583 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.584 11:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2233384 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2233384' 00:14:56.584 Process pid: 2233384 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2233384 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2233384 ']' 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.584 11:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.584 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:56.844 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.844 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:56.844 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:57.780 malloc0 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.780 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:57.781 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.781 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:57.781 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.781 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:57.781 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:29.865 Fuzzing completed. Shutting down the fuzz application 00:15:29.865 00:15:29.865 Dumping successful admin opcodes: 00:15:29.865 8, 9, 10, 24, 00:15:29.865 Dumping successful io opcodes: 00:15:29.865 0, 00:15:29.865 NS: 0x20000081ef00 I/O qp, Total commands completed: 1004682, total successful commands: 3938, random_seed: 3683647680 00:15:29.865 NS: 0x20000081ef00 admin qp, Total commands completed: 241881, total successful commands: 1943, random_seed: 322333824 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2233384 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2233384 ']' 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2233384 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2233384 00:15:29.865 11:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2233384' 00:15:29.865 killing process with pid 2233384 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2233384 00:15:29.865 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2233384 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:29.865 00:15:29.865 real 0m32.207s 00:15:29.865 user 0m29.692s 00:15:29.865 sys 0m31.894s 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:29.865 ************************************ 00:15:29.865 END TEST nvmf_vfio_user_fuzz 00:15:29.865 ************************************ 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
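The completed fuzz test above is driven by `target/vfio_user_fuzz.sh`, which configures the target through a fixed sequence of `rpc_cmd` calls before launching `nvme_fuzz`. As a hedged sketch for readers tracing the log, the setup sequence (command strings transcribed verbatim from the xtrace output above; not an authoritative listing of the SPDK RPC API) can be collected as:

```python
# RPC invocations issued by vfio_user_fuzz.sh, as recorded in the xtrace log:
# create the VFIOUSER transport, back it with a 64 MiB / 512 B-block malloc
# bdev, expose it as a subsystem with a namespace, and add a vfio-user listener.
rpc_calls = [
    "nvmf_create_transport -t VFIOUSER",
    "bdev_malloc_create 64 512 -b malloc0",
    "nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk",
    "nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0",
    "nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER"
    " -a /var/run/vfio-user -s 0",
]

for call in rpc_calls:
    print(call)
```

After this setup, the log shows `nvme_fuzz -m 0x2 -t 30 -S 123456 ... -N -a` being run against the resulting `trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user` transport ID for 30 seconds.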
00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:29.865 ************************************ 00:15:29.865 START TEST nvmf_auth_target 00:15:29.865 ************************************ 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:29.865 * Looking for test storage... 00:15:29.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.865 11:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.865 11:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:29.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.865 --rc genhtml_branch_coverage=1 00:15:29.865 --rc genhtml_function_coverage=1 00:15:29.865 --rc genhtml_legend=1 00:15:29.865 --rc geninfo_all_blocks=1 00:15:29.865 --rc geninfo_unexecuted_blocks=1 00:15:29.865 00:15:29.865 ' 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:29.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.865 --rc genhtml_branch_coverage=1 00:15:29.865 --rc genhtml_function_coverage=1 00:15:29.865 --rc genhtml_legend=1 00:15:29.865 --rc geninfo_all_blocks=1 00:15:29.865 --rc geninfo_unexecuted_blocks=1 00:15:29.865 00:15:29.865 ' 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:29.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.865 --rc genhtml_branch_coverage=1 00:15:29.865 --rc genhtml_function_coverage=1 00:15:29.865 --rc genhtml_legend=1 00:15:29.865 --rc geninfo_all_blocks=1 00:15:29.865 --rc geninfo_unexecuted_blocks=1 00:15:29.865 00:15:29.865 ' 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:29.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.865 --rc genhtml_branch_coverage=1 00:15:29.865 --rc genhtml_function_coverage=1 00:15:29.865 --rc genhtml_legend=1 00:15:29.865 
--rc geninfo_all_blocks=1 00:15:29.865 --rc geninfo_unexecuted_blocks=1 00:15:29.865 00:15:29.865 ' 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.865 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.866 
11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:29.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:29.866 11:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:29.866 11:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:29.866 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:35.139 11:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.139 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:35.140 11:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:35.140 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:35.140 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.140 
11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:35.140 Found net devices under 0000:86:00.0: cvl_0_0 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:35.140 
11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:35.140 Found net devices under 0000:86:00.1: cvl_0_1 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:35.140 11:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:35.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:15:35.140 00:15:35.140 --- 10.0.0.2 ping statistics --- 00:15:35.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.140 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:15:35.140 00:15:35.140 --- 10.0.0.1 ping statistics --- 00:15:35.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.140 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2241686 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2241686 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2241686 ']' 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.140 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2241753 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f30505fc068453c69f79f74aca47cf75c291a65b6f84d990 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XSA 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f30505fc068453c69f79f74aca47cf75c291a65b6f84d990 0 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f30505fc068453c69f79f74aca47cf75c291a65b6f84d990 0 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f30505fc068453c69f79f74aca47cf75c291a65b6f84d990 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XSA 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XSA 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.XSA 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c99a7b7a8fc95bd2c41a91f3c110e091aac1ca858c5502d9568a1c66647ac061 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bhD 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c99a7b7a8fc95bd2c41a91f3c110e091aac1ca858c5502d9568a1c66647ac061 3 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c99a7b7a8fc95bd2c41a91f3c110e091aac1ca858c5502d9568a1c66647ac061 3 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c99a7b7a8fc95bd2c41a91f3c110e091aac1ca858c5502d9568a1c66647ac061 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bhD 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bhD 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.bhD 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ae54d21ba93fadfcf22b2a4362fea1a6 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.E4S 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ae54d21ba93fadfcf22b2a4362fea1a6 1 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
ae54d21ba93fadfcf22b2a4362fea1a6 1 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ae54d21ba93fadfcf22b2a4362fea1a6 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.E4S 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.E4S 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.E4S 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=470f784611378d178487837706754a43c41ee7599afb81f9 00:15:35.141 11:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.CKs 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 470f784611378d178487837706754a43c41ee7599afb81f9 2 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 470f784611378d178487837706754a43c41ee7599afb81f9 2 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=470f784611378d178487837706754a43c41ee7599afb81f9 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.CKs 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.CKs 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.CKs 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d6a4553e140b3ac803bbf4defe3502185324be1c0f6054e1 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.p59 00:15:35.141 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d6a4553e140b3ac803bbf4defe3502185324be1c0f6054e1 2 00:15:35.142 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d6a4553e140b3ac803bbf4defe3502185324be1c0f6054e1 2 00:15:35.142 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:35.142 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:35.142 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d6a4553e140b3ac803bbf4defe3502185324be1c0f6054e1 00:15:35.142 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:35.142 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:35.400 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.p59 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.p59 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.p59 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3b30e9191f093e852c2feb70e4f78e11 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.AZ6 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3b30e9191f093e852c2feb70e4f78e11 1 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3b30e9191f093e852c2feb70e4f78e11 1 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3b30e9191f093e852c2feb70e4f78e11 00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:15:35.401 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.AZ6 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.AZ6 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.AZ6 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e3866f1be04cca278b4323710239b640f0aa74cad7f554233cfa6a055b660a23 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.06n 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e3866f1be04cca278b4323710239b640f0aa74cad7f554233cfa6a055b660a23 3 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 e3866f1be04cca278b4323710239b640f0aa74cad7f554233cfa6a055b660a23 3 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e3866f1be04cca278b4323710239b640f0aa74cad7f554233cfa6a055b660a23 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.06n 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.06n 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.06n 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2241686 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2241686 ']' 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.401 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.659 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.659 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:35.659 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2241753 /var/tmp/host.sock 00:15:35.659 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2241753 ']' 00:15:35.659 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:35.659 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.659 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:35.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
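The `gen_dhchap_key` calls traced above draw len/2 random bytes with `xxd -p -c0 /dev/urandom` and hand the hex string to an inline `python -` snippet for formatting (nvmf/common.sh@733). A minimal standalone sketch of that flow is below; the function names mirror the shell helpers rather than any real SPDK API, and the `DHHC-1:<digest>:<base64(key + crc32)>:` layout is assumed from the NVMe DH-HMAC-CHAP secret representation, not read out of SPDK's source:

```python
import base64
import binascii
import os

# digest ids as used by "format_dhchap_key <key> <digest>" in the trace
DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def format_dhchap_key(key_hex: str, digest_id: int) -> str:
    """Wrap a hex key as DHHC-1:<digest>:<base64(key || crc32le(key))>: (assumed layout)."""
    raw = bytes.fromhex(key_hex)
    crc = binascii.crc32(raw).to_bytes(4, "little")
    return f"DHHC-1:{digest_id:02d}:{base64.b64encode(raw + crc).decode()}:"

def gen_dhchap_key(digest: str, hex_len: int) -> str:
    """Mirror `xxd -p -c0 -l $((hex_len / 2)) /dev/urandom` plus formatting."""
    key_hex = os.urandom(hex_len // 2).hex()
    return format_dhchap_key(key_hex, DIGESTS[digest])

# e.g. the keys[0] material from the trace ("gen_dhchap_key null 48"):
print(format_dhchap_key("f30505fc068453c69f79f74aca47cf75c291a65b6f84d990", 0))
```

The trailing 4-byte CRC-32 lets the consumer detect a corrupted secret before use; the resulting string is what lands in the `/tmp/spdk.key-*` files that the keyring RPCs register next.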
00:15:35.659 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.659 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XSA 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.XSA 00:15:35.918 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.XSA 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.bhD ]] 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bhD 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bhD 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bhD 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.E4S 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.E4S 00:15:36.177 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.E4S 00:15:36.435 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.CKs ]] 00:15:36.435 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CKs 00:15:36.435 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.435 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.435 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.435 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CKs 00:15:36.435 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CKs 00:15:36.693 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:36.693 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.p59 00:15:36.693 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.693 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.693 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.693 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.p59 00:15:36.693 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.p59 00:15:36.950 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.AZ6 ]] 00:15:36.950 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AZ6 00:15:36.950 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.950 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.950 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.950 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AZ6 00:15:36.950 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AZ6 00:15:37.209 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:37.209 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.06n 00:15:37.209 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.209 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.209 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.209 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.06n 00:15:37.209 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.06n 00:15:37.209 11:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:37.209 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:37.209 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.209 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.209 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:37.209 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:37.467 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:37.467 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.467 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.467 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:37.467 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:37.467 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.467 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.467 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.467 11:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.467 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.467 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.467 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.467 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.726 00:15:37.726 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.726 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.726 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.984 { 00:15:37.984 "cntlid": 1, 00:15:37.984 "qid": 0, 00:15:37.984 "state": "enabled", 00:15:37.984 "thread": "nvmf_tgt_poll_group_000", 00:15:37.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:37.984 "listen_address": { 00:15:37.984 "trtype": "TCP", 00:15:37.984 "adrfam": "IPv4", 00:15:37.984 "traddr": "10.0.0.2", 00:15:37.984 "trsvcid": "4420" 00:15:37.984 }, 00:15:37.984 "peer_address": { 00:15:37.984 "trtype": "TCP", 00:15:37.984 "adrfam": "IPv4", 00:15:37.984 "traddr": "10.0.0.1", 00:15:37.984 "trsvcid": "54032" 00:15:37.984 }, 00:15:37.984 "auth": { 00:15:37.984 "state": "completed", 00:15:37.984 "digest": "sha256", 00:15:37.984 "dhgroup": "null" 00:15:37.984 } 00:15:37.984 } 00:15:37.984 ]' 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.984 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.243 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:15:38.243 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:15:38.810 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.810 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.810 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.810 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.810 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.810 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.810 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:38.810 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.069 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.328 00:15:39.328 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.328 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.328 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.587 { 00:15:39.587 "cntlid": 3, 00:15:39.587 "qid": 0, 00:15:39.587 "state": "enabled", 00:15:39.587 "thread": "nvmf_tgt_poll_group_000", 00:15:39.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.587 "listen_address": { 00:15:39.587 "trtype": "TCP", 00:15:39.587 "adrfam": "IPv4", 00:15:39.587 
"traddr": "10.0.0.2", 00:15:39.587 "trsvcid": "4420" 00:15:39.587 }, 00:15:39.587 "peer_address": { 00:15:39.587 "trtype": "TCP", 00:15:39.587 "adrfam": "IPv4", 00:15:39.587 "traddr": "10.0.0.1", 00:15:39.587 "trsvcid": "54058" 00:15:39.587 }, 00:15:39.587 "auth": { 00:15:39.587 "state": "completed", 00:15:39.587 "digest": "sha256", 00:15:39.587 "dhgroup": "null" 00:15:39.587 } 00:15:39.587 } 00:15:39.587 ]' 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.587 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.846 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:15:39.846 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:15:40.413 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.413 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.413 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.413 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.413 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.413 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.413 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:40.413 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.672 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.930 00:15:40.930 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.930 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.930 
11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.189 { 00:15:41.189 "cntlid": 5, 00:15:41.189 "qid": 0, 00:15:41.189 "state": "enabled", 00:15:41.189 "thread": "nvmf_tgt_poll_group_000", 00:15:41.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:41.189 "listen_address": { 00:15:41.189 "trtype": "TCP", 00:15:41.189 "adrfam": "IPv4", 00:15:41.189 "traddr": "10.0.0.2", 00:15:41.189 "trsvcid": "4420" 00:15:41.189 }, 00:15:41.189 "peer_address": { 00:15:41.189 "trtype": "TCP", 00:15:41.189 "adrfam": "IPv4", 00:15:41.189 "traddr": "10.0.0.1", 00:15:41.189 "trsvcid": "57266" 00:15:41.189 }, 00:15:41.189 "auth": { 00:15:41.189 "state": "completed", 00:15:41.189 "digest": "sha256", 00:15:41.189 "dhgroup": "null" 00:15:41.189 } 00:15:41.189 } 00:15:41.189 ]' 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.189 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.448 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:15:41.448 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:15:42.016 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.016 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:42.016 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.016 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.016 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.016 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.016 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:42.016 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:42.275 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:42.533 00:15:42.533 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.533 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.533 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.793 
11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.793 { 00:15:42.793 "cntlid": 7, 00:15:42.793 "qid": 0, 00:15:42.793 "state": "enabled", 00:15:42.793 "thread": "nvmf_tgt_poll_group_000", 00:15:42.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.793 "listen_address": { 00:15:42.793 "trtype": "TCP", 00:15:42.793 "adrfam": "IPv4", 00:15:42.793 "traddr": "10.0.0.2", 00:15:42.793 "trsvcid": "4420" 00:15:42.793 }, 00:15:42.793 "peer_address": { 00:15:42.793 "trtype": "TCP", 00:15:42.793 "adrfam": "IPv4", 00:15:42.793 "traddr": "10.0.0.1", 00:15:42.793 "trsvcid": "57282" 00:15:42.793 }, 00:15:42.793 "auth": { 00:15:42.793 "state": "completed", 00:15:42.793 "digest": "sha256", 00:15:42.793 "dhgroup": "null" 00:15:42.793 } 00:15:42.793 } 00:15:42.793 ]' 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.793 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.051 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:15:43.051 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:15:43.619 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.619 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.619 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.619 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.619 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.619 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.619 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.619 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:43.619 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:43.878 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:43.878 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.878 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.878 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:43.878 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:43.879 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.879 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.879 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.879 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.879 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.879 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.879 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.879 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.137 00:15:44.138 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.138 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.138 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.396 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.397 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.397 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.397 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.397 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.397 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.397 { 00:15:44.397 "cntlid": 9, 00:15:44.397 "qid": 0, 00:15:44.397 "state": "enabled", 00:15:44.397 "thread": "nvmf_tgt_poll_group_000", 00:15:44.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.397 "listen_address": { 00:15:44.397 "trtype": "TCP", 00:15:44.397 "adrfam": "IPv4", 00:15:44.397 "traddr": "10.0.0.2", 00:15:44.397 "trsvcid": "4420" 00:15:44.397 }, 00:15:44.397 "peer_address": { 00:15:44.397 "trtype": "TCP", 00:15:44.397 "adrfam": "IPv4", 00:15:44.397 "traddr": "10.0.0.1", 00:15:44.397 "trsvcid": "57304" 00:15:44.397 
}, 00:15:44.397 "auth": { 00:15:44.397 "state": "completed", 00:15:44.397 "digest": "sha256", 00:15:44.397 "dhgroup": "ffdhe2048" 00:15:44.397 } 00:15:44.397 } 00:15:44.397 ]' 00:15:44.397 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.397 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.397 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.397 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:44.397 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.397 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.397 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.397 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.656 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:15:44.656 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret 
DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:15:45.222 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.223 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.223 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.223 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.223 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.223 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.223 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:45.223 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.481 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.740 00:15:45.740 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.740 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.740 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.998 { 00:15:45.998 "cntlid": 11, 00:15:45.998 "qid": 0, 00:15:45.998 "state": "enabled", 00:15:45.998 "thread": "nvmf_tgt_poll_group_000", 00:15:45.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:45.998 "listen_address": { 00:15:45.998 "trtype": "TCP", 00:15:45.998 "adrfam": "IPv4", 00:15:45.998 "traddr": "10.0.0.2", 00:15:45.998 "trsvcid": "4420" 00:15:45.998 }, 00:15:45.998 "peer_address": { 00:15:45.998 "trtype": "TCP", 00:15:45.998 "adrfam": "IPv4", 00:15:45.998 "traddr": "10.0.0.1", 00:15:45.998 "trsvcid": "57330" 00:15:45.998 }, 00:15:45.998 "auth": { 00:15:45.998 "state": "completed", 00:15:45.998 "digest": "sha256", 00:15:45.998 "dhgroup": "ffdhe2048" 00:15:45.998 } 00:15:45.998 } 00:15:45.998 ]' 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.998 11:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.998 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.257 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:15:46.257 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:15:46.823 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.823 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.823 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:46.823 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.823 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.823 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.823 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.823 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.082 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.341 00:15:47.341 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.341 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.341 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.601 11:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.601 { 00:15:47.601 "cntlid": 13, 00:15:47.601 "qid": 0, 00:15:47.601 "state": "enabled", 00:15:47.601 "thread": "nvmf_tgt_poll_group_000", 00:15:47.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:47.601 "listen_address": { 00:15:47.601 "trtype": "TCP", 00:15:47.601 "adrfam": "IPv4", 00:15:47.601 "traddr": "10.0.0.2", 00:15:47.601 "trsvcid": "4420" 00:15:47.601 }, 00:15:47.601 "peer_address": { 00:15:47.601 "trtype": "TCP", 00:15:47.601 "adrfam": "IPv4", 00:15:47.601 "traddr": "10.0.0.1", 00:15:47.601 "trsvcid": "57358" 00:15:47.601 }, 00:15:47.601 "auth": { 00:15:47.601 "state": "completed", 00:15:47.601 "digest": "sha256", 00:15:47.601 "dhgroup": "ffdhe2048" 00:15:47.601 } 00:15:47.601 } 00:15:47.601 ]' 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.601 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.861 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:15:47.861 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:15:48.426 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.426 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.426 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.426 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.426 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.426 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.426 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:48.426 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.684 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.943 00:15:48.943 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.943 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.943 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.201 { 00:15:49.201 "cntlid": 15, 00:15:49.201 "qid": 0, 00:15:49.201 "state": "enabled", 00:15:49.201 "thread": "nvmf_tgt_poll_group_000", 00:15:49.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:49.201 "listen_address": { 00:15:49.201 "trtype": "TCP", 00:15:49.201 "adrfam": "IPv4", 00:15:49.201 "traddr": "10.0.0.2", 00:15:49.201 "trsvcid": "4420" 00:15:49.201 }, 00:15:49.201 "peer_address": { 00:15:49.201 "trtype": "TCP", 00:15:49.201 "adrfam": "IPv4", 00:15:49.201 "traddr": "10.0.0.1", 
00:15:49.201 "trsvcid": "57388" 00:15:49.201 }, 00:15:49.201 "auth": { 00:15:49.201 "state": "completed", 00:15:49.201 "digest": "sha256", 00:15:49.201 "dhgroup": "ffdhe2048" 00:15:49.201 } 00:15:49.201 } 00:15:49.201 ]' 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.201 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.460 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:15:49.460 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:15:50.026 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.027 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.027 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.027 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.027 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.027 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.027 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.027 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.027 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.285 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:50.285 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.285 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.285 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:50.285 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:50.285 11:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.285 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.285 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.285 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.285 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.285 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.285 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.286 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.544 00:15:50.544 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.544 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.544 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.803 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.803 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.803 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.803 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.803 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.803 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.803 { 00:15:50.803 "cntlid": 17, 00:15:50.803 "qid": 0, 00:15:50.803 "state": "enabled", 00:15:50.803 "thread": "nvmf_tgt_poll_group_000", 00:15:50.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.803 "listen_address": { 00:15:50.803 "trtype": "TCP", 00:15:50.803 "adrfam": "IPv4", 00:15:50.803 "traddr": "10.0.0.2", 00:15:50.803 "trsvcid": "4420" 00:15:50.803 }, 00:15:50.803 "peer_address": { 00:15:50.803 "trtype": "TCP", 00:15:50.803 "adrfam": "IPv4", 00:15:50.803 "traddr": "10.0.0.1", 00:15:50.803 "trsvcid": "40846" 00:15:50.803 }, 00:15:50.803 "auth": { 00:15:50.803 "state": "completed", 00:15:50.803 "digest": "sha256", 00:15:50.803 "dhgroup": "ffdhe3072" 00:15:50.803 } 00:15:50.803 } 00:15:50.803 ]' 00:15:50.803 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.803 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.803 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.803 11:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.803 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.062 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.062 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.062 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.062 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:15:51.063 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:15:51.629 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.629 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.629 11:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.629 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.888 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.888 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.888 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.888 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.888 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:51.888 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.888 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.888 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:51.888 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.888 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.889 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.889 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.889 11:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.889 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.889 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.889 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.889 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.147 00:15:52.147 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.147 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.147 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.406 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.406 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.406 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.406 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:52.406 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.406 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.406 { 00:15:52.406 "cntlid": 19, 00:15:52.406 "qid": 0, 00:15:52.406 "state": "enabled", 00:15:52.406 "thread": "nvmf_tgt_poll_group_000", 00:15:52.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:52.406 "listen_address": { 00:15:52.406 "trtype": "TCP", 00:15:52.406 "adrfam": "IPv4", 00:15:52.406 "traddr": "10.0.0.2", 00:15:52.406 "trsvcid": "4420" 00:15:52.406 }, 00:15:52.406 "peer_address": { 00:15:52.406 "trtype": "TCP", 00:15:52.406 "adrfam": "IPv4", 00:15:52.406 "traddr": "10.0.0.1", 00:15:52.406 "trsvcid": "40882" 00:15:52.406 }, 00:15:52.406 "auth": { 00:15:52.406 "state": "completed", 00:15:52.406 "digest": "sha256", 00:15:52.406 "dhgroup": "ffdhe3072" 00:15:52.406 } 00:15:52.406 } 00:15:52.406 ]' 00:15:52.406 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.406 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.406 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.406 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.406 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.665 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.665 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.665 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.665 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:15:52.665 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:15:53.232 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.232 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.232 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.232 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.232 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.232 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.232 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.232 11:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.490 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:53.490 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.490 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.490 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:53.490 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.490 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.490 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.490 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.490 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.490 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.490 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.491 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.491 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.749 00:15:53.749 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.749 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.749 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.008 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.008 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.008 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.008 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.008 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.008 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.008 { 00:15:54.008 "cntlid": 21, 00:15:54.008 "qid": 0, 00:15:54.008 "state": "enabled", 00:15:54.008 "thread": "nvmf_tgt_poll_group_000", 00:15:54.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.008 "listen_address": { 00:15:54.008 "trtype": "TCP", 00:15:54.008 "adrfam": "IPv4", 00:15:54.008 "traddr": "10.0.0.2", 00:15:54.008 
"trsvcid": "4420" 00:15:54.008 }, 00:15:54.008 "peer_address": { 00:15:54.008 "trtype": "TCP", 00:15:54.008 "adrfam": "IPv4", 00:15:54.008 "traddr": "10.0.0.1", 00:15:54.008 "trsvcid": "40904" 00:15:54.008 }, 00:15:54.008 "auth": { 00:15:54.008 "state": "completed", 00:15:54.008 "digest": "sha256", 00:15:54.008 "dhgroup": "ffdhe3072" 00:15:54.008 } 00:15:54.008 } 00:15:54.008 ]' 00:15:54.008 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.008 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.008 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.267 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:54.267 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.267 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.267 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.267 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.267 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:15:54.267 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:15:54.835 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.835 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.835 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.835 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.093 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.093 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.093 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:55.093 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:55.093 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:55.093 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.093 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.093 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:55.093 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.094 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.094 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:55.094 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.094 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.094 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.094 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.094 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.094 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.352 00:15:55.352 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.352 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.352 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.610 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.610 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.610 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.610 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.610 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.610 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.610 { 00:15:55.610 "cntlid": 23, 00:15:55.610 "qid": 0, 00:15:55.610 "state": "enabled", 00:15:55.610 "thread": "nvmf_tgt_poll_group_000", 00:15:55.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.610 "listen_address": { 00:15:55.610 "trtype": "TCP", 00:15:55.610 "adrfam": "IPv4", 00:15:55.610 "traddr": "10.0.0.2", 00:15:55.610 "trsvcid": "4420" 00:15:55.610 }, 00:15:55.610 "peer_address": { 00:15:55.611 "trtype": "TCP", 00:15:55.611 "adrfam": "IPv4", 00:15:55.611 "traddr": "10.0.0.1", 00:15:55.611 "trsvcid": "40914" 00:15:55.611 }, 00:15:55.611 "auth": { 00:15:55.611 "state": "completed", 00:15:55.611 "digest": "sha256", 00:15:55.611 "dhgroup": "ffdhe3072" 00:15:55.611 } 00:15:55.611 } 00:15:55.611 ]' 00:15:55.611 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.611 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.611 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.869 11:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.869 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.869 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.869 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.869 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.869 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:15:55.869 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:15:56.435 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.694 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.695 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:56.695 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.695 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.695 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.695 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.953 00:15:57.219 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.219 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.219 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.219 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.219 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.219 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.219 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.219 11:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.219 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.219 { 00:15:57.219 "cntlid": 25, 00:15:57.219 "qid": 0, 00:15:57.219 "state": "enabled", 00:15:57.219 "thread": "nvmf_tgt_poll_group_000", 00:15:57.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.219 "listen_address": { 00:15:57.219 "trtype": "TCP", 00:15:57.219 "adrfam": "IPv4", 00:15:57.219 "traddr": "10.0.0.2", 00:15:57.219 "trsvcid": "4420" 00:15:57.219 }, 00:15:57.219 "peer_address": { 00:15:57.219 "trtype": "TCP", 00:15:57.219 "adrfam": "IPv4", 00:15:57.219 "traddr": "10.0.0.1", 00:15:57.219 "trsvcid": "40940" 00:15:57.219 }, 00:15:57.219 "auth": { 00:15:57.219 "state": "completed", 00:15:57.219 "digest": "sha256", 00:15:57.219 "dhgroup": "ffdhe4096" 00:15:57.219 } 00:15:57.219 } 00:15:57.219 ]' 00:15:57.219 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.515 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.515 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.515 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.515 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.515 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.515 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.515 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.515 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:15:57.516 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:15:58.128 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.128 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.128 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.128 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.128 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.128 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.128 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.128 11:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.388 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.647 00:15:58.647 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.647 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.647 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.906 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.906 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.906 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.906 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.906 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.906 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.906 { 00:15:58.906 "cntlid": 27, 00:15:58.906 "qid": 0, 00:15:58.906 "state": "enabled", 00:15:58.906 "thread": "nvmf_tgt_poll_group_000", 00:15:58.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:58.906 "listen_address": { 00:15:58.906 "trtype": "TCP", 00:15:58.906 "adrfam": "IPv4", 00:15:58.906 "traddr": "10.0.0.2", 00:15:58.906 
"trsvcid": "4420" 00:15:58.906 }, 00:15:58.906 "peer_address": { 00:15:58.906 "trtype": "TCP", 00:15:58.906 "adrfam": "IPv4", 00:15:58.906 "traddr": "10.0.0.1", 00:15:58.906 "trsvcid": "40968" 00:15:58.906 }, 00:15:58.906 "auth": { 00:15:58.906 "state": "completed", 00:15:58.906 "digest": "sha256", 00:15:58.906 "dhgroup": "ffdhe4096" 00:15:58.906 } 00:15:58.906 } 00:15:58.906 ]' 00:15:58.906 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.906 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.906 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.906 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.906 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.165 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.165 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.165 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.165 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:15:59.165 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:15:59.731 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.731 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.732 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.732 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.732 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.732 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.732 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:59.732 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.990 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.249 00:16:00.249 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.249 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:00.249 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.507 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.507 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.507 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.507 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.507 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.507 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.507 { 00:16:00.507 "cntlid": 29, 00:16:00.507 "qid": 0, 00:16:00.507 "state": "enabled", 00:16:00.507 "thread": "nvmf_tgt_poll_group_000", 00:16:00.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.507 "listen_address": { 00:16:00.507 "trtype": "TCP", 00:16:00.507 "adrfam": "IPv4", 00:16:00.507 "traddr": "10.0.0.2", 00:16:00.507 "trsvcid": "4420" 00:16:00.507 }, 00:16:00.507 "peer_address": { 00:16:00.507 "trtype": "TCP", 00:16:00.507 "adrfam": "IPv4", 00:16:00.507 "traddr": "10.0.0.1", 00:16:00.507 "trsvcid": "52760" 00:16:00.507 }, 00:16:00.507 "auth": { 00:16:00.507 "state": "completed", 00:16:00.507 "digest": "sha256", 00:16:00.507 "dhgroup": "ffdhe4096" 00:16:00.507 } 00:16:00.507 } 00:16:00.508 ]' 00:16:00.508 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.508 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.508 11:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.766 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.766 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.766 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.766 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.766 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.024 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:01.024 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.592 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.851 00:16:02.110 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.110 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.110 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.110 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.110 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.110 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.110 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:02.110 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.110 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.110 { 00:16:02.110 "cntlid": 31, 00:16:02.110 "qid": 0, 00:16:02.110 "state": "enabled", 00:16:02.110 "thread": "nvmf_tgt_poll_group_000", 00:16:02.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:02.110 "listen_address": { 00:16:02.110 "trtype": "TCP", 00:16:02.110 "adrfam": "IPv4", 00:16:02.110 "traddr": "10.0.0.2", 00:16:02.110 "trsvcid": "4420" 00:16:02.110 }, 00:16:02.110 "peer_address": { 00:16:02.110 "trtype": "TCP", 00:16:02.110 "adrfam": "IPv4", 00:16:02.110 "traddr": "10.0.0.1", 00:16:02.110 "trsvcid": "52786" 00:16:02.110 }, 00:16:02.110 "auth": { 00:16:02.110 "state": "completed", 00:16:02.110 "digest": "sha256", 00:16:02.110 "dhgroup": "ffdhe4096" 00:16:02.110 } 00:16:02.110 } 00:16:02.110 ]' 00:16:02.110 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.369 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.369 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.369 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.369 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.369 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.369 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.369 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.629 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:02.629 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:03.197 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.197 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.197 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.197 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.197 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.198 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.198 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.198 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.198 11:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.457 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.716 00:16:03.716 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.716 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.716 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.975 { 00:16:03.975 "cntlid": 33, 00:16:03.975 "qid": 0, 00:16:03.975 "state": "enabled", 00:16:03.975 "thread": "nvmf_tgt_poll_group_000", 00:16:03.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:03.975 "listen_address": { 00:16:03.975 "trtype": "TCP", 00:16:03.975 "adrfam": "IPv4", 00:16:03.975 "traddr": "10.0.0.2", 00:16:03.975 
"trsvcid": "4420" 00:16:03.975 }, 00:16:03.975 "peer_address": { 00:16:03.975 "trtype": "TCP", 00:16:03.975 "adrfam": "IPv4", 00:16:03.975 "traddr": "10.0.0.1", 00:16:03.975 "trsvcid": "52810" 00:16:03.975 }, 00:16:03.975 "auth": { 00:16:03.975 "state": "completed", 00:16:03.975 "digest": "sha256", 00:16:03.975 "dhgroup": "ffdhe6144" 00:16:03.975 } 00:16:03.975 } 00:16:03.975 ]' 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.975 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.234 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:04.234 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:04.803 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.803 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.803 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.803 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.803 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.803 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.803 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.803 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:05.063 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:05.063 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.063 11:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.063 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:05.063 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:05.063 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.063 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.063 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.063 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.063 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.063 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.063 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.063 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.322 00:16:05.322 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.322 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.322 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.581 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.581 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.581 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.581 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.581 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.581 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.581 { 00:16:05.581 "cntlid": 35, 00:16:05.581 "qid": 0, 00:16:05.581 "state": "enabled", 00:16:05.581 "thread": "nvmf_tgt_poll_group_000", 00:16:05.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.581 "listen_address": { 00:16:05.581 "trtype": "TCP", 00:16:05.581 "adrfam": "IPv4", 00:16:05.581 "traddr": "10.0.0.2", 00:16:05.581 "trsvcid": "4420" 00:16:05.581 }, 00:16:05.581 "peer_address": { 00:16:05.581 "trtype": "TCP", 00:16:05.581 "adrfam": "IPv4", 00:16:05.581 "traddr": "10.0.0.1", 00:16:05.581 "trsvcid": "52838" 00:16:05.581 }, 00:16:05.581 "auth": { 00:16:05.581 "state": "completed", 00:16:05.581 "digest": "sha256", 00:16:05.581 "dhgroup": "ffdhe6144" 00:16:05.581 } 00:16:05.581 } 00:16:05.581 ]' 00:16:05.581 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.581 11:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.581 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.839 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.839 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.839 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.839 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.839 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.098 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:06.098 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.667 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.236 00:16:07.236 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.236 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.236 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.495 11:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.495 { 00:16:07.495 "cntlid": 37, 00:16:07.495 "qid": 0, 00:16:07.495 "state": "enabled", 00:16:07.495 "thread": "nvmf_tgt_poll_group_000", 00:16:07.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.495 "listen_address": { 00:16:07.495 "trtype": "TCP", 00:16:07.495 "adrfam": "IPv4", 00:16:07.495 "traddr": "10.0.0.2", 00:16:07.495 "trsvcid": "4420" 00:16:07.495 }, 00:16:07.495 "peer_address": { 00:16:07.495 "trtype": "TCP", 00:16:07.495 "adrfam": "IPv4", 00:16:07.495 "traddr": "10.0.0.1", 00:16:07.495 "trsvcid": "52866" 00:16:07.495 }, 00:16:07.495 "auth": { 00:16:07.495 "state": "completed", 00:16:07.495 "digest": "sha256", 00:16:07.495 "dhgroup": "ffdhe6144" 00:16:07.495 } 00:16:07.495 } 00:16:07.495 ]' 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.495 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.754 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:07.754 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:08.323 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.323 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.323 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.323 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.323 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.323 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.323 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:08.323 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.583 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.842 00:16:08.842 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.842 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.842 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.101 { 00:16:09.101 "cntlid": 39, 00:16:09.101 "qid": 0, 00:16:09.101 "state": "enabled", 00:16:09.101 "thread": "nvmf_tgt_poll_group_000", 00:16:09.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.101 "listen_address": { 00:16:09.101 "trtype": "TCP", 00:16:09.101 "adrfam": 
"IPv4", 00:16:09.101 "traddr": "10.0.0.2", 00:16:09.101 "trsvcid": "4420" 00:16:09.101 }, 00:16:09.101 "peer_address": { 00:16:09.101 "trtype": "TCP", 00:16:09.101 "adrfam": "IPv4", 00:16:09.101 "traddr": "10.0.0.1", 00:16:09.101 "trsvcid": "52900" 00:16:09.101 }, 00:16:09.101 "auth": { 00:16:09.101 "state": "completed", 00:16:09.101 "digest": "sha256", 00:16:09.101 "dhgroup": "ffdhe6144" 00:16:09.101 } 00:16:09.101 } 00:16:09.101 ]' 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.101 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.361 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:09.361 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:09.929 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.929 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.929 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.929 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.929 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.929 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.929 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.929 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:09.929 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.188 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:10.188 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.188 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.188 
11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:10.188 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.188 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.188 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.188 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.188 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.188 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.188 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.189 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.189 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.757 00:16:10.757 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.757 11:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.757 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.016 { 00:16:11.016 "cntlid": 41, 00:16:11.016 "qid": 0, 00:16:11.016 "state": "enabled", 00:16:11.016 "thread": "nvmf_tgt_poll_group_000", 00:16:11.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:11.016 "listen_address": { 00:16:11.016 "trtype": "TCP", 00:16:11.016 "adrfam": "IPv4", 00:16:11.016 "traddr": "10.0.0.2", 00:16:11.016 "trsvcid": "4420" 00:16:11.016 }, 00:16:11.016 "peer_address": { 00:16:11.016 "trtype": "TCP", 00:16:11.016 "adrfam": "IPv4", 00:16:11.016 "traddr": "10.0.0.1", 00:16:11.016 "trsvcid": "39888" 00:16:11.016 }, 00:16:11.016 "auth": { 00:16:11.016 "state": "completed", 00:16:11.016 "digest": "sha256", 00:16:11.016 "dhgroup": "ffdhe8192" 00:16:11.016 } 00:16:11.016 } 00:16:11.016 ]' 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.016 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.275 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:11.275 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:11.844 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.844 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.844 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.844 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.844 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.844 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.844 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:11.844 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.104 11:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.673 00:16:12.673 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.673 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.673 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.673 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.673 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.673 11:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.673 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.673 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.673 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.673 { 00:16:12.673 "cntlid": 43, 00:16:12.673 "qid": 0, 00:16:12.673 "state": "enabled", 00:16:12.673 "thread": "nvmf_tgt_poll_group_000", 00:16:12.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:12.673 "listen_address": { 00:16:12.673 "trtype": "TCP", 00:16:12.673 "adrfam": "IPv4", 00:16:12.673 "traddr": "10.0.0.2", 00:16:12.673 "trsvcid": "4420" 00:16:12.673 }, 00:16:12.673 "peer_address": { 00:16:12.673 "trtype": "TCP", 00:16:12.673 "adrfam": "IPv4", 00:16:12.673 "traddr": "10.0.0.1", 00:16:12.673 "trsvcid": "39918" 00:16:12.673 }, 00:16:12.673 "auth": { 00:16:12.673 "state": "completed", 00:16:12.673 "digest": "sha256", 00:16:12.673 "dhgroup": "ffdhe8192" 00:16:12.673 } 00:16:12.673 } 00:16:12.673 ]' 00:16:12.673 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.932 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.932 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.932 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.932 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.932 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.932 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.932 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.191 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:13.192 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.760 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.019 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.019 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.019 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.019 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.278 00:16:14.278 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.278 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.278 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.538 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.538 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.538 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.538 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.538 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.538 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.538 { 00:16:14.538 "cntlid": 45, 00:16:14.538 "qid": 0, 00:16:14.538 "state": "enabled", 00:16:14.538 "thread": "nvmf_tgt_poll_group_000", 00:16:14.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:14.538 
"listen_address": { 00:16:14.538 "trtype": "TCP", 00:16:14.538 "adrfam": "IPv4", 00:16:14.538 "traddr": "10.0.0.2", 00:16:14.538 "trsvcid": "4420" 00:16:14.538 }, 00:16:14.538 "peer_address": { 00:16:14.538 "trtype": "TCP", 00:16:14.538 "adrfam": "IPv4", 00:16:14.538 "traddr": "10.0.0.1", 00:16:14.538 "trsvcid": "39948" 00:16:14.538 }, 00:16:14.538 "auth": { 00:16:14.538 "state": "completed", 00:16:14.538 "digest": "sha256", 00:16:14.538 "dhgroup": "ffdhe8192" 00:16:14.538 } 00:16:14.538 } 00:16:14.538 ]' 00:16:14.538 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.538 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.538 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.799 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.799 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.799 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.799 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.799 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.058 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:15.058 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.625 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.193 00:16:16.193 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.193 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:16.193 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.452 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.452 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.452 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.452 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.452 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.452 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.452 { 00:16:16.452 "cntlid": 47, 00:16:16.452 "qid": 0, 00:16:16.452 "state": "enabled", 00:16:16.452 "thread": "nvmf_tgt_poll_group_000", 00:16:16.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:16.452 "listen_address": { 00:16:16.452 "trtype": "TCP", 00:16:16.452 "adrfam": "IPv4", 00:16:16.452 "traddr": "10.0.0.2", 00:16:16.452 "trsvcid": "4420" 00:16:16.452 }, 00:16:16.452 "peer_address": { 00:16:16.452 "trtype": "TCP", 00:16:16.452 "adrfam": "IPv4", 00:16:16.452 "traddr": "10.0.0.1", 00:16:16.452 "trsvcid": "39968" 00:16:16.452 }, 00:16:16.452 "auth": { 00:16:16.452 "state": "completed", 00:16:16.452 "digest": "sha256", 00:16:16.452 "dhgroup": "ffdhe8192" 00:16:16.452 } 00:16:16.452 } 00:16:16.452 ]' 00:16:16.452 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.452 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.452 11:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.452 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:16.452 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.711 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.711 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.711 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.711 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:16.711 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:17.278 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.278 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.278 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:17.278 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.279 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.279 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:17.279 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.279 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.279 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.279 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.538 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:17.538 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.538 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.538 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:17.538 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:17.538 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.538 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.538 
11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.538 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.538 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.538 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.538 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.538 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.798 00:16:17.798 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.798 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.798 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.057 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.057 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.057 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.057 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.057 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.057 { 00:16:18.057 "cntlid": 49, 00:16:18.057 "qid": 0, 00:16:18.057 "state": "enabled", 00:16:18.057 "thread": "nvmf_tgt_poll_group_000", 00:16:18.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:18.057 "listen_address": { 00:16:18.057 "trtype": "TCP", 00:16:18.057 "adrfam": "IPv4", 00:16:18.057 "traddr": "10.0.0.2", 00:16:18.057 "trsvcid": "4420" 00:16:18.057 }, 00:16:18.057 "peer_address": { 00:16:18.057 "trtype": "TCP", 00:16:18.057 "adrfam": "IPv4", 00:16:18.057 "traddr": "10.0.0.1", 00:16:18.057 "trsvcid": "40000" 00:16:18.057 }, 00:16:18.057 "auth": { 00:16:18.057 "state": "completed", 00:16:18.057 "digest": "sha384", 00:16:18.057 "dhgroup": "null" 00:16:18.057 } 00:16:18.057 } 00:16:18.058 ]' 00:16:18.058 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.058 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.058 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.058 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:18.058 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.058 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.058 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:18.058 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.317 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:18.317 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:18.886 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.886 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.886 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.886 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.886 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.886 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.886 11:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:18.886 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:19.145 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:19.146 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.146 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.146 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:19.146 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:19.146 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.146 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.146 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.146 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.146 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.146 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.146 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.146 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.405 00:16:19.405 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.405 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.405 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.663 { 00:16:19.663 "cntlid": 51, 00:16:19.663 "qid": 0, 00:16:19.663 "state": "enabled", 00:16:19.663 "thread": "nvmf_tgt_poll_group_000", 00:16:19.663 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.663 "listen_address": { 00:16:19.663 "trtype": "TCP", 00:16:19.663 "adrfam": "IPv4", 00:16:19.663 "traddr": "10.0.0.2", 00:16:19.663 "trsvcid": "4420" 00:16:19.663 }, 00:16:19.663 "peer_address": { 00:16:19.663 "trtype": "TCP", 00:16:19.663 "adrfam": "IPv4", 00:16:19.663 "traddr": "10.0.0.1", 00:16:19.663 "trsvcid": "40032" 00:16:19.663 }, 00:16:19.663 "auth": { 00:16:19.663 "state": "completed", 00:16:19.663 "digest": "sha384", 00:16:19.663 "dhgroup": "null" 00:16:19.663 } 00:16:19.663 } 00:16:19.663 ]' 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.663 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.922 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:19.922 11:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:20.491 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.491 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.491 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.491 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.491 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.491 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.491 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.491 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.750 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.009 00:16:21.010 11:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.010 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.010 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.269 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.269 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.269 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.269 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.269 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.269 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.269 { 00:16:21.269 "cntlid": 53, 00:16:21.269 "qid": 0, 00:16:21.269 "state": "enabled", 00:16:21.269 "thread": "nvmf_tgt_poll_group_000", 00:16:21.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.269 "listen_address": { 00:16:21.269 "trtype": "TCP", 00:16:21.269 "adrfam": "IPv4", 00:16:21.269 "traddr": "10.0.0.2", 00:16:21.269 "trsvcid": "4420" 00:16:21.269 }, 00:16:21.269 "peer_address": { 00:16:21.269 "trtype": "TCP", 00:16:21.269 "adrfam": "IPv4", 00:16:21.269 "traddr": "10.0.0.1", 00:16:21.269 "trsvcid": "58754" 00:16:21.269 }, 00:16:21.269 "auth": { 00:16:21.269 "state": "completed", 00:16:21.269 "digest": "sha384", 00:16:21.269 "dhgroup": "null" 00:16:21.269 } 00:16:21.269 } 00:16:21.269 ]' 00:16:21.269 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:21.269 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.269 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.269 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.269 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.528 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.528 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.528 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.528 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:21.528 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:22.096 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.096 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.096 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.096 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.096 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.096 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.096 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:22.096 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:22.355 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:22.355 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.355 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.355 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.355 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.355 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.355 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:22.355 
11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.355 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.355 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.355 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.355 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.355 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.614 00:16:22.614 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.614 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.614 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.873 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.873 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.873 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.874 11:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.874 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.874 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.874 { 00:16:22.874 "cntlid": 55, 00:16:22.874 "qid": 0, 00:16:22.874 "state": "enabled", 00:16:22.874 "thread": "nvmf_tgt_poll_group_000", 00:16:22.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.874 "listen_address": { 00:16:22.874 "trtype": "TCP", 00:16:22.874 "adrfam": "IPv4", 00:16:22.874 "traddr": "10.0.0.2", 00:16:22.874 "trsvcid": "4420" 00:16:22.874 }, 00:16:22.874 "peer_address": { 00:16:22.874 "trtype": "TCP", 00:16:22.874 "adrfam": "IPv4", 00:16:22.874 "traddr": "10.0.0.1", 00:16:22.874 "trsvcid": "58784" 00:16:22.874 }, 00:16:22.874 "auth": { 00:16:22.874 "state": "completed", 00:16:22.874 "digest": "sha384", 00:16:22.874 "dhgroup": "null" 00:16:22.874 } 00:16:22.874 } 00:16:22.874 ]' 00:16:22.874 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.874 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.874 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.874 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.874 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.874 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.874 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.874 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.133 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:23.133 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:23.701 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.701 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.701 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.701 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.701 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.701 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.701 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.701 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:23.701 11:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.960 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.219 00:16:24.219 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.219 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.219 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.479 { 00:16:24.479 "cntlid": 57, 00:16:24.479 "qid": 0, 00:16:24.479 "state": "enabled", 00:16:24.479 "thread": "nvmf_tgt_poll_group_000", 00:16:24.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:24.479 "listen_address": { 00:16:24.479 "trtype": "TCP", 00:16:24.479 "adrfam": "IPv4", 00:16:24.479 "traddr": "10.0.0.2", 00:16:24.479 
"trsvcid": "4420" 00:16:24.479 }, 00:16:24.479 "peer_address": { 00:16:24.479 "trtype": "TCP", 00:16:24.479 "adrfam": "IPv4", 00:16:24.479 "traddr": "10.0.0.1", 00:16:24.479 "trsvcid": "58812" 00:16:24.479 }, 00:16:24.479 "auth": { 00:16:24.479 "state": "completed", 00:16:24.479 "digest": "sha384", 00:16:24.479 "dhgroup": "ffdhe2048" 00:16:24.479 } 00:16:24.479 } 00:16:24.479 ]' 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.479 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.738 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:24.738 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:25.306 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.306 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.306 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.306 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.306 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.306 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.306 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.306 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.566 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:25.566 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.566 11:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.566 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.566 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.566 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.566 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.566 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.567 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.567 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.567 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.567 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.567 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.831 00:16:25.831 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.831 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.831 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.090 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.090 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.090 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.090 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.090 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.090 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.090 { 00:16:26.090 "cntlid": 59, 00:16:26.090 "qid": 0, 00:16:26.090 "state": "enabled", 00:16:26.090 "thread": "nvmf_tgt_poll_group_000", 00:16:26.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.090 "listen_address": { 00:16:26.090 "trtype": "TCP", 00:16:26.090 "adrfam": "IPv4", 00:16:26.090 "traddr": "10.0.0.2", 00:16:26.090 "trsvcid": "4420" 00:16:26.090 }, 00:16:26.090 "peer_address": { 00:16:26.090 "trtype": "TCP", 00:16:26.090 "adrfam": "IPv4", 00:16:26.090 "traddr": "10.0.0.1", 00:16:26.090 "trsvcid": "58834" 00:16:26.090 }, 00:16:26.090 "auth": { 00:16:26.090 "state": "completed", 00:16:26.090 "digest": "sha384", 00:16:26.090 "dhgroup": "ffdhe2048" 00:16:26.090 } 00:16:26.090 } 00:16:26.091 ]' 00:16:26.091 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.091 11:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.091 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.091 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.091 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.091 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.091 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.091 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.350 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:26.350 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:26.918 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.918 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.918 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.918 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.918 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.918 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.918 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:26.918 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
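Each iteration above validates the authenticated qpair by piping `rpc.py nvmf_subsystem_get_qpairs` output through `jq -r '.[0].auth.digest'`, `'.[0].auth.dhgroup'`, and `'.[0].auth.state'`. The same checks can be sketched in Python against a payload shaped like the ones printed in this log (field names match the SPDK output; the literal values below are copied from one iteration, and `check_auth` is a hypothetical helper, not part of the test suite):

```python
import json

# Sample qpairs payload as reported by `rpc.py nvmf_subsystem_get_qpairs`
# in the log above; values taken from one ffdhe2048 iteration.
qpairs_json = '''
[
  {
    "cntlid": 61,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha384",
      "dhgroup": "ffdhe2048"
    }
  }
]
'''

def check_auth(payload: str, digest: str, dhgroup: str) -> None:
    """Mirror the jq checks: .[0].auth.digest / .dhgroup / .state."""
    auth = json.loads(payload)[0]["auth"]
    assert auth["digest"] == digest, auth
    assert auth["dhgroup"] == dhgroup, auth
    assert auth["state"] == "completed", auth

check_auth(qpairs_json, "sha384", "ffdhe2048")
```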
00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.177 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.437 00:16:27.437 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.437 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.437 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.696 11:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.696 { 00:16:27.696 "cntlid": 61, 00:16:27.696 "qid": 0, 00:16:27.696 "state": "enabled", 00:16:27.696 "thread": "nvmf_tgt_poll_group_000", 00:16:27.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:27.696 "listen_address": { 00:16:27.696 "trtype": "TCP", 00:16:27.696 "adrfam": "IPv4", 00:16:27.696 "traddr": "10.0.0.2", 00:16:27.696 "trsvcid": "4420" 00:16:27.696 }, 00:16:27.696 "peer_address": { 00:16:27.696 "trtype": "TCP", 00:16:27.696 "adrfam": "IPv4", 00:16:27.696 "traddr": "10.0.0.1", 00:16:27.696 "trsvcid": "58848" 00:16:27.696 }, 00:16:27.696 "auth": { 00:16:27.696 "state": "completed", 00:16:27.696 "digest": "sha384", 00:16:27.696 "dhgroup": "ffdhe2048" 00:16:27.696 } 00:16:27.696 } 00:16:27.696 ]' 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.696 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.955 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:27.955 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:28.524 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.524 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.524 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.524 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.524 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.524 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.524 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:28.524 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.784 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.043 00:16:29.043 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.043 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.043 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.303 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.303 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.303 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.303 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.303 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.303 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.303 { 00:16:29.303 "cntlid": 63, 00:16:29.303 "qid": 0, 00:16:29.303 "state": "enabled", 00:16:29.303 "thread": "nvmf_tgt_poll_group_000", 00:16:29.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:29.303 "listen_address": { 00:16:29.303 "trtype": "TCP", 00:16:29.303 "adrfam": 
"IPv4", 00:16:29.303 "traddr": "10.0.0.2", 00:16:29.303 "trsvcid": "4420" 00:16:29.303 }, 00:16:29.303 "peer_address": { 00:16:29.303 "trtype": "TCP", 00:16:29.303 "adrfam": "IPv4", 00:16:29.303 "traddr": "10.0.0.1", 00:16:29.303 "trsvcid": "58884" 00:16:29.303 }, 00:16:29.303 "auth": { 00:16:29.303 "state": "completed", 00:16:29.303 "digest": "sha384", 00:16:29.303 "dhgroup": "ffdhe2048" 00:16:29.303 } 00:16:29.303 } 00:16:29.303 ]' 00:16:29.303 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.303 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.303 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.303 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.303 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.303 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.303 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.303 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.563 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:29.563 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:30.132 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.132 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.132 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.132 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.132 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.132 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.132 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.132 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:30.132 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:30.391 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:30.391 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.391 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:30.391 
11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:30.391 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:30.391 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.391 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.391 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.391 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.391 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.391 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.391 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.392 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.651 00:16:30.651 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.651 11:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.651 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.910 { 00:16:30.910 "cntlid": 65, 00:16:30.910 "qid": 0, 00:16:30.910 "state": "enabled", 00:16:30.910 "thread": "nvmf_tgt_poll_group_000", 00:16:30.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.910 "listen_address": { 00:16:30.910 "trtype": "TCP", 00:16:30.910 "adrfam": "IPv4", 00:16:30.910 "traddr": "10.0.0.2", 00:16:30.910 "trsvcid": "4420" 00:16:30.910 }, 00:16:30.910 "peer_address": { 00:16:30.910 "trtype": "TCP", 00:16:30.910 "adrfam": "IPv4", 00:16:30.910 "traddr": "10.0.0.1", 00:16:30.910 "trsvcid": "50060" 00:16:30.910 }, 00:16:30.910 "auth": { 00:16:30.910 "state": "completed", 00:16:30.910 "digest": "sha384", 00:16:30.910 "dhgroup": "ffdhe3072" 00:16:30.910 } 00:16:30.910 } 00:16:30.910 ]' 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.910 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.170 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:31.170 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:31.738 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.738 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.738 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.738 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.738 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.738 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.738 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:31.738 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.997 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.256 00:16:32.256 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.256 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.256 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.516 11:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.516 { 00:16:32.516 "cntlid": 67, 00:16:32.516 "qid": 0, 00:16:32.516 "state": "enabled", 00:16:32.516 "thread": "nvmf_tgt_poll_group_000", 00:16:32.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:32.516 "listen_address": { 00:16:32.516 "trtype": "TCP", 00:16:32.516 "adrfam": "IPv4", 00:16:32.516 "traddr": "10.0.0.2", 00:16:32.516 "trsvcid": "4420" 00:16:32.516 }, 00:16:32.516 "peer_address": { 00:16:32.516 "trtype": "TCP", 00:16:32.516 "adrfam": "IPv4", 00:16:32.516 "traddr": "10.0.0.1", 00:16:32.516 "trsvcid": "50100" 00:16:32.516 }, 00:16:32.516 "auth": { 00:16:32.516 "state": "completed", 00:16:32.516 "digest": "sha384", 00:16:32.516 "dhgroup": "ffdhe3072" 00:16:32.516 } 00:16:32.516 } 00:16:32.516 ]' 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.516 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.775 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:32.775 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:33.342 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.342 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.342 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.342 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.342 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.342 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.343 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.343 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.602 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.861 00:16:33.861 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.861 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.861 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.159 { 00:16:34.159 "cntlid": 69, 00:16:34.159 "qid": 0, 00:16:34.159 "state": "enabled", 00:16:34.159 "thread": "nvmf_tgt_poll_group_000", 00:16:34.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.159 
"listen_address": { 00:16:34.159 "trtype": "TCP", 00:16:34.159 "adrfam": "IPv4", 00:16:34.159 "traddr": "10.0.0.2", 00:16:34.159 "trsvcid": "4420" 00:16:34.159 }, 00:16:34.159 "peer_address": { 00:16:34.159 "trtype": "TCP", 00:16:34.159 "adrfam": "IPv4", 00:16:34.159 "traddr": "10.0.0.1", 00:16:34.159 "trsvcid": "50132" 00:16:34.159 }, 00:16:34.159 "auth": { 00:16:34.159 "state": "completed", 00:16:34.159 "digest": "sha384", 00:16:34.159 "dhgroup": "ffdhe3072" 00:16:34.159 } 00:16:34.159 } 00:16:34.159 ]' 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.159 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.473 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:34.473 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.051 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.052 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.052 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:35.052 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.052 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.052 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.052 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:35.052 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.052 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.311 00:16:35.311 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.311 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:35.311 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.569 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.569 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.569 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.569 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.569 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.569 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.569 { 00:16:35.569 "cntlid": 71, 00:16:35.569 "qid": 0, 00:16:35.569 "state": "enabled", 00:16:35.569 "thread": "nvmf_tgt_poll_group_000", 00:16:35.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.569 "listen_address": { 00:16:35.569 "trtype": "TCP", 00:16:35.569 "adrfam": "IPv4", 00:16:35.569 "traddr": "10.0.0.2", 00:16:35.569 "trsvcid": "4420" 00:16:35.569 }, 00:16:35.569 "peer_address": { 00:16:35.569 "trtype": "TCP", 00:16:35.569 "adrfam": "IPv4", 00:16:35.569 "traddr": "10.0.0.1", 00:16:35.569 "trsvcid": "50168" 00:16:35.569 }, 00:16:35.569 "auth": { 00:16:35.569 "state": "completed", 00:16:35.569 "digest": "sha384", 00:16:35.569 "dhgroup": "ffdhe3072" 00:16:35.569 } 00:16:35.569 } 00:16:35.569 ]' 00:16:35.569 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.569 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.569 11:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.828 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.828 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.828 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.828 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.828 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.828 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:35.828 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:36.405 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.405 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.405 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:36.405 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.669 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.928 00:16:36.928 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.928 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.928 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.187 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.187 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.187 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.187 11:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.187 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.187 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.187 { 00:16:37.187 "cntlid": 73, 00:16:37.187 "qid": 0, 00:16:37.187 "state": "enabled", 00:16:37.187 "thread": "nvmf_tgt_poll_group_000", 00:16:37.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:37.188 "listen_address": { 00:16:37.188 "trtype": "TCP", 00:16:37.188 "adrfam": "IPv4", 00:16:37.188 "traddr": "10.0.0.2", 00:16:37.188 "trsvcid": "4420" 00:16:37.188 }, 00:16:37.188 "peer_address": { 00:16:37.188 "trtype": "TCP", 00:16:37.188 "adrfam": "IPv4", 00:16:37.188 "traddr": "10.0.0.1", 00:16:37.188 "trsvcid": "50188" 00:16:37.188 }, 00:16:37.188 "auth": { 00:16:37.188 "state": "completed", 00:16:37.188 "digest": "sha384", 00:16:37.188 "dhgroup": "ffdhe4096" 00:16:37.188 } 00:16:37.188 } 00:16:37.188 ]' 00:16:37.188 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.188 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.188 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.446 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.446 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.446 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.446 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.446 11:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.707 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:37.707 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:38.275 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.275 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.275 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.275 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.275 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.275 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.275 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.275 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.275 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.534 00:16:38.793 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.793 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.793 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.793 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.793 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.793 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.793 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.793 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.793 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.793 { 00:16:38.793 "cntlid": 75, 00:16:38.793 "qid": 0, 00:16:38.793 "state": "enabled", 00:16:38.793 "thread": "nvmf_tgt_poll_group_000", 00:16:38.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.793 
"listen_address": { 00:16:38.793 "trtype": "TCP", 00:16:38.793 "adrfam": "IPv4", 00:16:38.793 "traddr": "10.0.0.2", 00:16:38.793 "trsvcid": "4420" 00:16:38.793 }, 00:16:38.793 "peer_address": { 00:16:38.793 "trtype": "TCP", 00:16:38.793 "adrfam": "IPv4", 00:16:38.793 "traddr": "10.0.0.1", 00:16:38.793 "trsvcid": "50200" 00:16:38.793 }, 00:16:38.793 "auth": { 00:16:38.793 "state": "completed", 00:16:38.793 "digest": "sha384", 00:16:38.793 "dhgroup": "ffdhe4096" 00:16:38.793 } 00:16:38.793 } 00:16:38.793 ]' 00:16:38.793 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.053 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.053 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.053 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.053 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.053 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.053 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.053 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.312 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:39.312 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.880 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.139 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.398 00:16:40.398 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:40.398 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.398 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.398 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.398 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.398 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.398 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.398 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.398 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.398 { 00:16:40.398 "cntlid": 77, 00:16:40.398 "qid": 0, 00:16:40.398 "state": "enabled", 00:16:40.398 "thread": "nvmf_tgt_poll_group_000", 00:16:40.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.399 "listen_address": { 00:16:40.399 "trtype": "TCP", 00:16:40.399 "adrfam": "IPv4", 00:16:40.399 "traddr": "10.0.0.2", 00:16:40.399 "trsvcid": "4420" 00:16:40.399 }, 00:16:40.399 "peer_address": { 00:16:40.399 "trtype": "TCP", 00:16:40.399 "adrfam": "IPv4", 00:16:40.399 "traddr": "10.0.0.1", 00:16:40.399 "trsvcid": "49922" 00:16:40.399 }, 00:16:40.399 "auth": { 00:16:40.399 "state": "completed", 00:16:40.399 "digest": "sha384", 00:16:40.399 "dhgroup": "ffdhe4096" 00:16:40.399 } 00:16:40.399 } 00:16:40.399 ]' 00:16:40.399 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.658 11:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.658 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.658 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.658 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.658 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.658 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.658 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.916 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:40.916 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:41.484 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.485 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.485 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.485 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.485 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.485 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.485 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:41.485 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:41.485 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:41.744 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.744 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.744 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.744 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.744 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.744 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:41.744 11:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.744 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.744 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.744 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.744 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.744 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.003 00:16:42.003 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.003 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.003 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.003 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.003 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.003 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.003 11:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.003 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.003 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.003 { 00:16:42.003 "cntlid": 79, 00:16:42.003 "qid": 0, 00:16:42.003 "state": "enabled", 00:16:42.003 "thread": "nvmf_tgt_poll_group_000", 00:16:42.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:42.003 "listen_address": { 00:16:42.003 "trtype": "TCP", 00:16:42.003 "adrfam": "IPv4", 00:16:42.003 "traddr": "10.0.0.2", 00:16:42.003 "trsvcid": "4420" 00:16:42.003 }, 00:16:42.004 "peer_address": { 00:16:42.004 "trtype": "TCP", 00:16:42.004 "adrfam": "IPv4", 00:16:42.004 "traddr": "10.0.0.1", 00:16:42.004 "trsvcid": "49938" 00:16:42.004 }, 00:16:42.004 "auth": { 00:16:42.004 "state": "completed", 00:16:42.004 "digest": "sha384", 00:16:42.004 "dhgroup": "ffdhe4096" 00:16:42.004 } 00:16:42.004 } 00:16:42.004 ]' 00:16:42.004 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.262 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.262 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.262 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.262 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.262 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.262 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.263 11:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.520 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:42.520 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:43.087 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.087 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.087 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.087 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.087 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.087 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.087 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.087 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:43.087 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.346 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.605 00:16:43.605 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.605 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.605 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.865 { 00:16:43.865 "cntlid": 81, 00:16:43.865 "qid": 0, 00:16:43.865 "state": "enabled", 00:16:43.865 "thread": "nvmf_tgt_poll_group_000", 00:16:43.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.865 "listen_address": { 
00:16:43.865 "trtype": "TCP", 00:16:43.865 "adrfam": "IPv4", 00:16:43.865 "traddr": "10.0.0.2", 00:16:43.865 "trsvcid": "4420" 00:16:43.865 }, 00:16:43.865 "peer_address": { 00:16:43.865 "trtype": "TCP", 00:16:43.865 "adrfam": "IPv4", 00:16:43.865 "traddr": "10.0.0.1", 00:16:43.865 "trsvcid": "49980" 00:16:43.865 }, 00:16:43.865 "auth": { 00:16:43.865 "state": "completed", 00:16:43.865 "digest": "sha384", 00:16:43.865 "dhgroup": "ffdhe6144" 00:16:43.865 } 00:16:43.865 } 00:16:43.865 ]' 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.865 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.124 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:44.124 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:44.691 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.691 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.691 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.691 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.691 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.691 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.691 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:44.691 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.949 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.208 00:16:45.208 11:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.208 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.208 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.466 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.466 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.466 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.466 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.466 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.466 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.466 { 00:16:45.466 "cntlid": 83, 00:16:45.466 "qid": 0, 00:16:45.466 "state": "enabled", 00:16:45.466 "thread": "nvmf_tgt_poll_group_000", 00:16:45.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.466 "listen_address": { 00:16:45.466 "trtype": "TCP", 00:16:45.466 "adrfam": "IPv4", 00:16:45.466 "traddr": "10.0.0.2", 00:16:45.466 "trsvcid": "4420" 00:16:45.466 }, 00:16:45.466 "peer_address": { 00:16:45.466 "trtype": "TCP", 00:16:45.466 "adrfam": "IPv4", 00:16:45.466 "traddr": "10.0.0.1", 00:16:45.466 "trsvcid": "50014" 00:16:45.466 }, 00:16:45.466 "auth": { 00:16:45.466 "state": "completed", 00:16:45.466 "digest": "sha384", 00:16:45.466 "dhgroup": "ffdhe6144" 00:16:45.466 } 00:16:45.466 } 00:16:45.466 ]' 00:16:45.466 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:45.467 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.467 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.467 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.467 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.725 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.725 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.725 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.725 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:45.725 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:46.291 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.291 11:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.291 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.291 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.291 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.291 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.291 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.291 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.550 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.118 00:16:47.118 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.118 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.118 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.118 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.118 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.118 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.118 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.118 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.118 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.118 { 00:16:47.118 "cntlid": 85, 00:16:47.118 "qid": 0, 00:16:47.118 "state": "enabled", 00:16:47.118 "thread": "nvmf_tgt_poll_group_000", 00:16:47.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.118 "listen_address": { 00:16:47.118 "trtype": "TCP", 00:16:47.118 "adrfam": "IPv4", 00:16:47.118 "traddr": "10.0.0.2", 00:16:47.118 "trsvcid": "4420" 00:16:47.118 }, 00:16:47.118 "peer_address": { 00:16:47.118 "trtype": "TCP", 00:16:47.118 "adrfam": "IPv4", 00:16:47.118 "traddr": "10.0.0.1", 00:16:47.118 "trsvcid": "50050" 00:16:47.118 }, 00:16:47.118 "auth": { 00:16:47.118 "state": "completed", 00:16:47.118 "digest": "sha384", 00:16:47.118 "dhgroup": "ffdhe6144" 00:16:47.118 } 00:16:47.118 } 00:16:47.118 ]' 00:16:47.118 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.118 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.118 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.377 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.377 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.377 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:47.377 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.377 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.635 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:47.635 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.203 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.770 00:16:48.770 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.770 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.770 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.770 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.770 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.770 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.770 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.770 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.770 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.770 { 00:16:48.770 "cntlid": 87, 00:16:48.770 "qid": 0, 00:16:48.770 "state": "enabled", 00:16:48.770 "thread": "nvmf_tgt_poll_group_000", 00:16:48.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.770 "listen_address": { 00:16:48.770 "trtype": 
"TCP", 00:16:48.770 "adrfam": "IPv4", 00:16:48.770 "traddr": "10.0.0.2", 00:16:48.770 "trsvcid": "4420" 00:16:48.770 }, 00:16:48.770 "peer_address": { 00:16:48.770 "trtype": "TCP", 00:16:48.770 "adrfam": "IPv4", 00:16:48.770 "traddr": "10.0.0.1", 00:16:48.770 "trsvcid": "50078" 00:16:48.770 }, 00:16:48.770 "auth": { 00:16:48.770 "state": "completed", 00:16:48.770 "digest": "sha384", 00:16:48.770 "dhgroup": "ffdhe6144" 00:16:48.770 } 00:16:48.770 } 00:16:48.770 ]' 00:16:49.029 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.029 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.029 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.029 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.029 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.029 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.029 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.029 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.287 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:49.287 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:49.856 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.856 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.856 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.856 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.856 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.856 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.856 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.856 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:49.856 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.115 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:50.115 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.115 11:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.115 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.115 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.115 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.115 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.115 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.115 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.115 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.115 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.115 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.115 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.374 00:16:50.635 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.635 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.635 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.636 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.636 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.636 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.636 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.636 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.636 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.636 { 00:16:50.636 "cntlid": 89, 00:16:50.636 "qid": 0, 00:16:50.636 "state": "enabled", 00:16:50.636 "thread": "nvmf_tgt_poll_group_000", 00:16:50.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:50.636 "listen_address": { 00:16:50.636 "trtype": "TCP", 00:16:50.636 "adrfam": "IPv4", 00:16:50.636 "traddr": "10.0.0.2", 00:16:50.636 "trsvcid": "4420" 00:16:50.636 }, 00:16:50.636 "peer_address": { 00:16:50.636 "trtype": "TCP", 00:16:50.636 "adrfam": "IPv4", 00:16:50.636 "traddr": "10.0.0.1", 00:16:50.636 "trsvcid": "53724" 00:16:50.636 }, 00:16:50.636 "auth": { 00:16:50.636 "state": "completed", 00:16:50.636 "digest": "sha384", 00:16:50.636 "dhgroup": "ffdhe8192" 00:16:50.636 } 00:16:50.636 } 00:16:50.636 ]' 00:16:50.636 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.895 11:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.895 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.895 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.895 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.895 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.895 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.895 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.154 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:51.154 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:51.722 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.723 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.723 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.723 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.723 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.982 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.982 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.982 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.982 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.240 00:16:52.240 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.240 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.240 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.499 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.499 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.499 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.499 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.499 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.499 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.499 { 00:16:52.499 "cntlid": 91, 00:16:52.499 "qid": 0, 00:16:52.499 "state": "enabled", 00:16:52.499 "thread": "nvmf_tgt_poll_group_000", 00:16:52.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.499 "listen_address": { 00:16:52.499 "trtype": "TCP", 00:16:52.499 "adrfam": "IPv4", 00:16:52.499 "traddr": "10.0.0.2", 00:16:52.499 "trsvcid": "4420" 00:16:52.499 }, 00:16:52.499 "peer_address": { 00:16:52.499 "trtype": "TCP", 00:16:52.499 "adrfam": "IPv4", 00:16:52.499 "traddr": "10.0.0.1", 00:16:52.499 "trsvcid": "53754" 00:16:52.499 }, 00:16:52.499 "auth": { 00:16:52.499 "state": "completed", 00:16:52.499 "digest": "sha384", 00:16:52.499 "dhgroup": "ffdhe8192" 00:16:52.499 } 00:16:52.499 } 00:16:52.499 ]' 00:16:52.499 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.499 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.499 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.758 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.758 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.758 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:52.758 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.758 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.758 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:52.758 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:53.332 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.332 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.332 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.332 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.332 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.332 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:53.332 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:53.332 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:53.590 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:53.590 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.590 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.590 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.590 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.590 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.591 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.591 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.591 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.591 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.591 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.591 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.591 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.157 00:16:54.157 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.157 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.157 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.416 { 00:16:54.416 "cntlid": 93, 00:16:54.416 "qid": 0, 00:16:54.416 "state": "enabled", 00:16:54.416 "thread": "nvmf_tgt_poll_group_000", 00:16:54.416 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.416 "listen_address": { 00:16:54.416 "trtype": "TCP", 00:16:54.416 "adrfam": "IPv4", 00:16:54.416 "traddr": "10.0.0.2", 00:16:54.416 "trsvcid": "4420" 00:16:54.416 }, 00:16:54.416 "peer_address": { 00:16:54.416 "trtype": "TCP", 00:16:54.416 "adrfam": "IPv4", 00:16:54.416 "traddr": "10.0.0.1", 00:16:54.416 "trsvcid": "53796" 00:16:54.416 }, 00:16:54.416 "auth": { 00:16:54.416 "state": "completed", 00:16:54.416 "digest": "sha384", 00:16:54.416 "dhgroup": "ffdhe8192" 00:16:54.416 } 00:16:54.416 } 00:16:54.416 ]' 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.416 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.675 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:54.675 11:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:16:55.243 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.243 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.243 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.243 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.243 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.243 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.243 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.243 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.502 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.068 00:16:56.068 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:56.068 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.068 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.068 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.068 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.068 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.068 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.327 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.327 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.327 { 00:16:56.327 "cntlid": 95, 00:16:56.327 "qid": 0, 00:16:56.327 "state": "enabled", 00:16:56.327 "thread": "nvmf_tgt_poll_group_000", 00:16:56.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.327 "listen_address": { 00:16:56.327 "trtype": "TCP", 00:16:56.327 "adrfam": "IPv4", 00:16:56.327 "traddr": "10.0.0.2", 00:16:56.327 "trsvcid": "4420" 00:16:56.327 }, 00:16:56.327 "peer_address": { 00:16:56.327 "trtype": "TCP", 00:16:56.327 "adrfam": "IPv4", 00:16:56.327 "traddr": "10.0.0.1", 00:16:56.327 "trsvcid": "53838" 00:16:56.327 }, 00:16:56.327 "auth": { 00:16:56.327 "state": "completed", 00:16:56.327 "digest": "sha384", 00:16:56.327 "dhgroup": "ffdhe8192" 00:16:56.327 } 00:16:56.327 } 00:16:56.327 ]' 00:16:56.327 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.327 11:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.327 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.327 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.327 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.327 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.327 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.327 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.586 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:56.586 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:16:57.153 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.153 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.153 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.153 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.153 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.153 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:57.153 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.153 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.153 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.153 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.412 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.412 00:16:57.672 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.672 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.672 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.672 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.672 11:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.672 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.672 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.672 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.672 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.672 { 00:16:57.672 "cntlid": 97, 00:16:57.672 "qid": 0, 00:16:57.672 "state": "enabled", 00:16:57.672 "thread": "nvmf_tgt_poll_group_000", 00:16:57.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.672 "listen_address": { 00:16:57.672 "trtype": "TCP", 00:16:57.672 "adrfam": "IPv4", 00:16:57.672 "traddr": "10.0.0.2", 00:16:57.672 "trsvcid": "4420" 00:16:57.672 }, 00:16:57.672 "peer_address": { 00:16:57.672 "trtype": "TCP", 00:16:57.672 "adrfam": "IPv4", 00:16:57.672 "traddr": "10.0.0.1", 00:16:57.672 "trsvcid": "53860" 00:16:57.672 }, 00:16:57.672 "auth": { 00:16:57.672 "state": "completed", 00:16:57.672 "digest": "sha512", 00:16:57.672 "dhgroup": "null" 00:16:57.672 } 00:16:57.672 } 00:16:57.672 ]' 00:16:57.672 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.931 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.931 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.931 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:57.931 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.931 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.931 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.931 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:58.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:16:58.757 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.757 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.757 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.757 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.757 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.757 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.757 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:58.757 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.016 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.274 00:16:59.274 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.274 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.274 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.274 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.274 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.274 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.274 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.274 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.274 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.274 { 00:16:59.274 "cntlid": 99, 
00:16:59.274 "qid": 0, 00:16:59.274 "state": "enabled", 00:16:59.274 "thread": "nvmf_tgt_poll_group_000", 00:16:59.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:59.274 "listen_address": { 00:16:59.274 "trtype": "TCP", 00:16:59.274 "adrfam": "IPv4", 00:16:59.274 "traddr": "10.0.0.2", 00:16:59.274 "trsvcid": "4420" 00:16:59.274 }, 00:16:59.274 "peer_address": { 00:16:59.274 "trtype": "TCP", 00:16:59.274 "adrfam": "IPv4", 00:16:59.274 "traddr": "10.0.0.1", 00:16:59.274 "trsvcid": "53890" 00:16:59.274 }, 00:16:59.274 "auth": { 00:16:59.274 "state": "completed", 00:16:59.274 "digest": "sha512", 00:16:59.274 "dhgroup": "null" 00:16:59.274 } 00:16:59.274 } 00:16:59.274 ]' 00:16:59.274 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.533 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.533 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.533 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:59.533 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.533 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.533 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.533 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.793 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret 
DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:16:59.793 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:17:00.361 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.361 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.361 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.361 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.361 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.361 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.361 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.361 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.361 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.620 00:17:00.620 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.620 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.620 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.879 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.879 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.879 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.879 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.879 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.879 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.879 { 00:17:00.879 "cntlid": 101, 00:17:00.879 "qid": 0, 00:17:00.879 "state": "enabled", 00:17:00.879 "thread": "nvmf_tgt_poll_group_000", 00:17:00.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.879 "listen_address": { 00:17:00.879 "trtype": "TCP", 00:17:00.879 "adrfam": "IPv4", 00:17:00.879 "traddr": "10.0.0.2", 00:17:00.879 "trsvcid": "4420" 00:17:00.879 }, 00:17:00.879 "peer_address": { 00:17:00.879 "trtype": "TCP", 00:17:00.879 "adrfam": "IPv4", 00:17:00.879 "traddr": "10.0.0.1", 00:17:00.879 "trsvcid": "35514" 00:17:00.879 }, 00:17:00.879 "auth": { 00:17:00.879 "state": "completed", 00:17:00.879 "digest": "sha512", 00:17:00.879 "dhgroup": "null" 00:17:00.879 } 00:17:00.879 } 
00:17:00.879 ]' 00:17:00.879 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.879 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.879 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.138 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.138 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.138 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.138 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.138 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.138 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:17:01.138 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:17:01.706 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.965 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.965 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.224 00:17:02.224 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.224 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.224 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.483 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.483 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:02.483 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.483 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.483 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.483 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.483 { 00:17:02.483 "cntlid": 103, 00:17:02.483 "qid": 0, 00:17:02.483 "state": "enabled", 00:17:02.483 "thread": "nvmf_tgt_poll_group_000", 00:17:02.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.483 "listen_address": { 00:17:02.483 "trtype": "TCP", 00:17:02.483 "adrfam": "IPv4", 00:17:02.483 "traddr": "10.0.0.2", 00:17:02.483 "trsvcid": "4420" 00:17:02.483 }, 00:17:02.483 "peer_address": { 00:17:02.483 "trtype": "TCP", 00:17:02.483 "adrfam": "IPv4", 00:17:02.483 "traddr": "10.0.0.1", 00:17:02.483 "trsvcid": "35526" 00:17:02.483 }, 00:17:02.483 "auth": { 00:17:02.483 "state": "completed", 00:17:02.483 "digest": "sha512", 00:17:02.483 "dhgroup": "null" 00:17:02.483 } 00:17:02.483 } 00:17:02.483 ]' 00:17:02.483 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.483 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.483 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.483 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:02.483 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.741 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.741 11:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.741 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.741 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:02.741 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:03.309 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.309 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.309 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.309 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.309 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.309 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.309 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.309 11:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:03.309 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.568 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.827 00:17:03.827 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.827 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.827 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.086 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.086 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.086 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.086 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.086 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.086 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.086 { 00:17:04.086 "cntlid": 105, 00:17:04.087 "qid": 0, 00:17:04.087 "state": "enabled", 00:17:04.087 "thread": "nvmf_tgt_poll_group_000", 00:17:04.087 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.087 "listen_address": { 00:17:04.087 "trtype": "TCP", 00:17:04.087 "adrfam": "IPv4", 00:17:04.087 "traddr": "10.0.0.2", 00:17:04.087 "trsvcid": "4420" 00:17:04.087 }, 00:17:04.087 "peer_address": { 00:17:04.087 "trtype": "TCP", 00:17:04.087 "adrfam": "IPv4", 00:17:04.087 "traddr": "10.0.0.1", 00:17:04.087 "trsvcid": "35556" 00:17:04.087 }, 00:17:04.087 "auth": { 00:17:04.087 "state": "completed", 00:17:04.087 "digest": "sha512", 00:17:04.087 "dhgroup": "ffdhe2048" 00:17:04.087 } 00:17:04.087 } 00:17:04.087 ]' 00:17:04.087 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.087 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.087 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.087 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:04.087 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.087 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.087 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.087 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.345 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret 
DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:17:04.345 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:17:04.914 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.914 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.914 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.914 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.914 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.914 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.914 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:04.914 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.173 11:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:05.173 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.173 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.173 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:05.173 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.173 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.173 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.173 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.173 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.173 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.173 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.173 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.173 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.432 00:17:05.432 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.432 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.432 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.690 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.690 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.690 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.690 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.690 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.690 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.690 { 00:17:05.690 "cntlid": 107, 00:17:05.690 "qid": 0, 00:17:05.690 "state": "enabled", 00:17:05.690 "thread": "nvmf_tgt_poll_group_000", 00:17:05.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:05.690 "listen_address": { 00:17:05.690 "trtype": "TCP", 00:17:05.690 "adrfam": "IPv4", 00:17:05.691 "traddr": "10.0.0.2", 00:17:05.691 "trsvcid": "4420" 00:17:05.691 }, 00:17:05.691 "peer_address": { 00:17:05.691 "trtype": "TCP", 00:17:05.691 "adrfam": "IPv4", 00:17:05.691 "traddr": "10.0.0.1", 00:17:05.691 "trsvcid": "35574" 00:17:05.691 }, 00:17:05.691 "auth": { 00:17:05.691 "state": 
"completed", 00:17:05.691 "digest": "sha512", 00:17:05.691 "dhgroup": "ffdhe2048" 00:17:05.691 } 00:17:05.691 } 00:17:05.691 ]' 00:17:05.691 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.691 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.691 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.691 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.691 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.691 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.691 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.691 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.949 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:17:05.949 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:17:06.518 11:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.518 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.518 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.518 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.518 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.518 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.518 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:06.518 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.777 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.036 00:17:07.036 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.036 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.036 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.294 
11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.294 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.294 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.294 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.294 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.294 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.294 { 00:17:07.294 "cntlid": 109, 00:17:07.294 "qid": 0, 00:17:07.294 "state": "enabled", 00:17:07.294 "thread": "nvmf_tgt_poll_group_000", 00:17:07.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.294 "listen_address": { 00:17:07.294 "trtype": "TCP", 00:17:07.294 "adrfam": "IPv4", 00:17:07.294 "traddr": "10.0.0.2", 00:17:07.294 "trsvcid": "4420" 00:17:07.294 }, 00:17:07.294 "peer_address": { 00:17:07.294 "trtype": "TCP", 00:17:07.294 "adrfam": "IPv4", 00:17:07.294 "traddr": "10.0.0.1", 00:17:07.294 "trsvcid": "35594" 00:17:07.294 }, 00:17:07.294 "auth": { 00:17:07.294 "state": "completed", 00:17:07.294 "digest": "sha512", 00:17:07.294 "dhgroup": "ffdhe2048" 00:17:07.294 } 00:17:07.294 } 00:17:07.294 ]' 00:17:07.294 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.294 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.294 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.294 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.294 11:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.294 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.294 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.294 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.553 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:17:07.553 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:17:08.120 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.121 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.121 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.121 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.121 
11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.121 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.121 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.121 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.379 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:08.379 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.379 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:08.379 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:08.379 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:08.379 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.379 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:08.379 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.379 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.379 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.379 11:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.380 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.380 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.638 00:17:08.638 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.638 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.638 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.897 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.897 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.897 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.897 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.897 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.897 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.897 { 00:17:08.897 "cntlid": 111, 
00:17:08.897 "qid": 0, 00:17:08.897 "state": "enabled", 00:17:08.897 "thread": "nvmf_tgt_poll_group_000", 00:17:08.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.897 "listen_address": { 00:17:08.897 "trtype": "TCP", 00:17:08.897 "adrfam": "IPv4", 00:17:08.897 "traddr": "10.0.0.2", 00:17:08.897 "trsvcid": "4420" 00:17:08.897 }, 00:17:08.897 "peer_address": { 00:17:08.897 "trtype": "TCP", 00:17:08.897 "adrfam": "IPv4", 00:17:08.897 "traddr": "10.0.0.1", 00:17:08.897 "trsvcid": "35630" 00:17:08.897 }, 00:17:08.897 "auth": { 00:17:08.897 "state": "completed", 00:17:08.897 "digest": "sha512", 00:17:08.897 "dhgroup": "ffdhe2048" 00:17:08.897 } 00:17:08.897 } 00:17:08.897 ]' 00:17:08.897 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.897 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.897 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.898 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.898 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.898 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.898 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.898 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.156 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:09.157 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:09.724 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.724 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.724 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.724 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.724 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.724 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.724 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.724 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.724 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.983 11:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:09.983 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.983 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.983 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:09.983 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.983 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.983 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.983 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.983 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.983 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.983 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.983 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.983 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.242 00:17:10.242 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.242 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.242 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.501 { 00:17:10.501 "cntlid": 113, 00:17:10.501 "qid": 0, 00:17:10.501 "state": "enabled", 00:17:10.501 "thread": "nvmf_tgt_poll_group_000", 00:17:10.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:10.501 "listen_address": { 00:17:10.501 "trtype": "TCP", 00:17:10.501 "adrfam": "IPv4", 00:17:10.501 "traddr": "10.0.0.2", 00:17:10.501 "trsvcid": "4420" 00:17:10.501 }, 00:17:10.501 "peer_address": { 00:17:10.501 "trtype": "TCP", 00:17:10.501 "adrfam": "IPv4", 00:17:10.501 "traddr": "10.0.0.1", 00:17:10.501 "trsvcid": "41928" 00:17:10.501 }, 00:17:10.501 "auth": { 00:17:10.501 "state": 
"completed", 00:17:10.501 "digest": "sha512", 00:17:10.501 "dhgroup": "ffdhe3072" 00:17:10.501 } 00:17:10.501 } 00:17:10.501 ]' 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.501 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.760 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:17:10.760 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret 
DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:17:11.327 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.327 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.327 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.327 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.327 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.327 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.327 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.328 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.587 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.848 00:17:11.848 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.848 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.849 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.148 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.148 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.148 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.148 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.148 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.148 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.148 { 00:17:12.148 "cntlid": 115, 00:17:12.148 "qid": 0, 00:17:12.148 "state": "enabled", 00:17:12.148 "thread": "nvmf_tgt_poll_group_000", 00:17:12.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.148 "listen_address": { 00:17:12.148 "trtype": "TCP", 00:17:12.148 "adrfam": "IPv4", 00:17:12.148 "traddr": "10.0.0.2", 00:17:12.148 "trsvcid": "4420" 00:17:12.148 }, 00:17:12.148 "peer_address": { 00:17:12.148 "trtype": "TCP", 00:17:12.148 "adrfam": "IPv4", 00:17:12.148 "traddr": "10.0.0.1", 00:17:12.148 "trsvcid": "41948" 00:17:12.148 }, 00:17:12.148 "auth": { 00:17:12.149 "state": "completed", 00:17:12.149 "digest": "sha512", 00:17:12.149 "dhgroup": "ffdhe3072" 00:17:12.149 } 00:17:12.149 } 00:17:12.149 ]' 00:17:12.149 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.149 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.149 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.149 11:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.149 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.149 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.149 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.149 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.440 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:17:12.440 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:17:13.038 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.038 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.038 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:13.038 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.038 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.038 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.038 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.038 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.038 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:13.297 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.297 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.297 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:13.297 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.297 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.297 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.297 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.297 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:13.297 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.297 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.297 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.297 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.555 00:17:13.555 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.555 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.555 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.555 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.555 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.555 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.555 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.555 11:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.555 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.555 { 00:17:13.555 "cntlid": 117, 00:17:13.555 "qid": 0, 00:17:13.555 "state": "enabled", 00:17:13.555 "thread": "nvmf_tgt_poll_group_000", 00:17:13.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:13.555 "listen_address": { 00:17:13.555 "trtype": "TCP", 00:17:13.555 "adrfam": "IPv4", 00:17:13.555 "traddr": "10.0.0.2", 00:17:13.555 "trsvcid": "4420" 00:17:13.555 }, 00:17:13.555 "peer_address": { 00:17:13.555 "trtype": "TCP", 00:17:13.555 "adrfam": "IPv4", 00:17:13.555 "traddr": "10.0.0.1", 00:17:13.555 "trsvcid": "41958" 00:17:13.555 }, 00:17:13.555 "auth": { 00:17:13.555 "state": "completed", 00:17:13.555 "digest": "sha512", 00:17:13.555 "dhgroup": "ffdhe3072" 00:17:13.555 } 00:17:13.555 } 00:17:13.555 ]' 00:17:13.555 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.813 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.813 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.813 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.813 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.813 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.813 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.813 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.071 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:17:14.071 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:17:14.638 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.638 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.638 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.638 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.638 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.638 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.638 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.638 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.897 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.156 00:17:15.156 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.156 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.156 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.156 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.156 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.156 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.156 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.156 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.156 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.156 { 00:17:15.156 "cntlid": 119, 00:17:15.156 "qid": 0, 00:17:15.156 "state": "enabled", 00:17:15.156 "thread": "nvmf_tgt_poll_group_000", 00:17:15.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:15.156 "listen_address": { 00:17:15.156 "trtype": "TCP", 00:17:15.156 "adrfam": "IPv4", 00:17:15.156 "traddr": "10.0.0.2", 00:17:15.156 "trsvcid": "4420" 00:17:15.156 }, 00:17:15.156 "peer_address": { 00:17:15.156 "trtype": "TCP", 00:17:15.156 "adrfam": "IPv4", 00:17:15.156 "traddr": "10.0.0.1", 
00:17:15.156 "trsvcid": "41986" 00:17:15.156 }, 00:17:15.156 "auth": { 00:17:15.156 "state": "completed", 00:17:15.156 "digest": "sha512", 00:17:15.156 "dhgroup": "ffdhe3072" 00:17:15.156 } 00:17:15.156 } 00:17:15.156 ]' 00:17:15.156 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.414 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.414 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.414 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.414 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.414 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.414 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.415 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.673 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:15.673 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:16.241 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.241 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.241 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.241 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.241 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.241 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.241 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.241 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.241 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.241 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:16.241 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.500 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.500 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:16.500 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.500 11:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.500 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.500 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.500 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.500 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.500 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.500 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.500 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.758 00:17:16.758 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.758 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.758 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.758 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.758 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.758 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.758 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.017 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.017 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.017 { 00:17:17.017 "cntlid": 121, 00:17:17.017 "qid": 0, 00:17:17.017 "state": "enabled", 00:17:17.017 "thread": "nvmf_tgt_poll_group_000", 00:17:17.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:17.017 "listen_address": { 00:17:17.017 "trtype": "TCP", 00:17:17.017 "adrfam": "IPv4", 00:17:17.017 "traddr": "10.0.0.2", 00:17:17.017 "trsvcid": "4420" 00:17:17.017 }, 00:17:17.017 "peer_address": { 00:17:17.017 "trtype": "TCP", 00:17:17.017 "adrfam": "IPv4", 00:17:17.017 "traddr": "10.0.0.1", 00:17:17.017 "trsvcid": "42004" 00:17:17.017 }, 00:17:17.017 "auth": { 00:17:17.017 "state": "completed", 00:17:17.017 "digest": "sha512", 00:17:17.017 "dhgroup": "ffdhe4096" 00:17:17.017 } 00:17:17.017 } 00:17:17.017 ]' 00:17:17.017 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.017 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.017 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.017 11:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:17.017 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.017 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.017 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.017 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.276 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:17:17.276 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:17:17.843 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.843 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.843 11:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.843 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.843 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.843 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.843 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.843 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.102 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:18.102 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.102 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.102 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:18.102 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:18.102 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.102 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.102 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.102 11:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.102 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.102 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.102 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.102 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.361 00:17:18.361 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.361 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.361 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.620 { 00:17:18.620 "cntlid": 123, 00:17:18.620 "qid": 0, 00:17:18.620 "state": "enabled", 00:17:18.620 "thread": "nvmf_tgt_poll_group_000", 00:17:18.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:18.620 "listen_address": { 00:17:18.620 "trtype": "TCP", 00:17:18.620 "adrfam": "IPv4", 00:17:18.620 "traddr": "10.0.0.2", 00:17:18.620 "trsvcid": "4420" 00:17:18.620 }, 00:17:18.620 "peer_address": { 00:17:18.620 "trtype": "TCP", 00:17:18.620 "adrfam": "IPv4", 00:17:18.620 "traddr": "10.0.0.1", 00:17:18.620 "trsvcid": "42028" 00:17:18.620 }, 00:17:18.620 "auth": { 00:17:18.620 "state": "completed", 00:17:18.620 "digest": "sha512", 00:17:18.620 "dhgroup": "ffdhe4096" 00:17:18.620 } 00:17:18.620 } 00:17:18.620 ]' 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.620 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.879 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:17:18.879 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:17:19.448 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.448 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.448 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.448 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.448 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.448 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.448 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.448 11:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.707 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.966 00:17:19.966 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.966 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.966 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.225 { 00:17:20.225 "cntlid": 125, 00:17:20.225 "qid": 0, 00:17:20.225 "state": "enabled", 00:17:20.225 "thread": "nvmf_tgt_poll_group_000", 00:17:20.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:20.225 "listen_address": { 00:17:20.225 "trtype": "TCP", 00:17:20.225 "adrfam": "IPv4", 00:17:20.225 "traddr": "10.0.0.2", 00:17:20.225 
"trsvcid": "4420" 00:17:20.225 }, 00:17:20.225 "peer_address": { 00:17:20.225 "trtype": "TCP", 00:17:20.225 "adrfam": "IPv4", 00:17:20.225 "traddr": "10.0.0.1", 00:17:20.225 "trsvcid": "42062" 00:17:20.225 }, 00:17:20.225 "auth": { 00:17:20.225 "state": "completed", 00:17:20.225 "digest": "sha512", 00:17:20.225 "dhgroup": "ffdhe4096" 00:17:20.225 } 00:17:20.225 } 00:17:20.225 ]' 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.225 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.484 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:17:20.484 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:17:21.051 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.051 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.051 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.051 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.051 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.051 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.051 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.051 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.310 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.569 00:17:21.569 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.569 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:21.569 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.828 { 00:17:21.828 "cntlid": 127, 00:17:21.828 "qid": 0, 00:17:21.828 "state": "enabled", 00:17:21.828 "thread": "nvmf_tgt_poll_group_000", 00:17:21.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:21.828 "listen_address": { 00:17:21.828 "trtype": "TCP", 00:17:21.828 "adrfam": "IPv4", 00:17:21.828 "traddr": "10.0.0.2", 00:17:21.828 "trsvcid": "4420" 00:17:21.828 }, 00:17:21.828 "peer_address": { 00:17:21.828 "trtype": "TCP", 00:17:21.828 "adrfam": "IPv4", 00:17:21.828 "traddr": "10.0.0.1", 00:17:21.828 "trsvcid": "43876" 00:17:21.828 }, 00:17:21.828 "auth": { 00:17:21.828 "state": "completed", 00:17:21.828 "digest": "sha512", 00:17:21.828 "dhgroup": "ffdhe4096" 00:17:21.828 } 00:17:21.828 } 00:17:21.828 ]' 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.828 
11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.828 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.087 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:22.087 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:22.654 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.654 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.654 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.654 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:22.654 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.654 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.654 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.654 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.654 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.913 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.171 00:17:23.171 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.171 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.171 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.430 11:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.430 { 00:17:23.430 "cntlid": 129, 00:17:23.430 "qid": 0, 00:17:23.430 "state": "enabled", 00:17:23.430 "thread": "nvmf_tgt_poll_group_000", 00:17:23.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:23.430 "listen_address": { 00:17:23.430 "trtype": "TCP", 00:17:23.430 "adrfam": "IPv4", 00:17:23.430 "traddr": "10.0.0.2", 00:17:23.430 "trsvcid": "4420" 00:17:23.430 }, 00:17:23.430 "peer_address": { 00:17:23.430 "trtype": "TCP", 00:17:23.430 "adrfam": "IPv4", 00:17:23.430 "traddr": "10.0.0.1", 00:17:23.430 "trsvcid": "43894" 00:17:23.430 }, 00:17:23.430 "auth": { 00:17:23.430 "state": "completed", 00:17:23.430 "digest": "sha512", 00:17:23.430 "dhgroup": "ffdhe6144" 00:17:23.430 } 00:17:23.430 } 00:17:23.430 ]' 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.430 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.689 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:17:23.689 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:17:24.257 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.257 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.257 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.257 11:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.257 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.257 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.257 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:24.257 11:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.517 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.776 00:17:25.035 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.035 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.035 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.035 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.035 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.035 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.035 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.035 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.035 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.035 { 00:17:25.035 "cntlid": 131, 00:17:25.035 "qid": 0, 00:17:25.035 "state": "enabled", 00:17:25.035 "thread": "nvmf_tgt_poll_group_000", 00:17:25.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:25.035 "listen_address": { 00:17:25.035 "trtype": "TCP", 00:17:25.035 "adrfam": "IPv4", 00:17:25.035 "traddr": "10.0.0.2", 00:17:25.035 
"trsvcid": "4420" 00:17:25.035 }, 00:17:25.035 "peer_address": { 00:17:25.035 "trtype": "TCP", 00:17:25.035 "adrfam": "IPv4", 00:17:25.035 "traddr": "10.0.0.1", 00:17:25.035 "trsvcid": "43920" 00:17:25.035 }, 00:17:25.035 "auth": { 00:17:25.035 "state": "completed", 00:17:25.035 "digest": "sha512", 00:17:25.035 "dhgroup": "ffdhe6144" 00:17:25.035 } 00:17:25.035 } 00:17:25.035 ]' 00:17:25.035 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.293 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.293 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.293 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:25.293 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.293 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.293 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.293 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.552 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:17:25.552 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.121 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.122 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.122 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.381 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.381 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.381 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.381 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.640 00:17:26.640 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.640 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:26.640 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.899 { 00:17:26.899 "cntlid": 133, 00:17:26.899 "qid": 0, 00:17:26.899 "state": "enabled", 00:17:26.899 "thread": "nvmf_tgt_poll_group_000", 00:17:26.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:26.899 "listen_address": { 00:17:26.899 "trtype": "TCP", 00:17:26.899 "adrfam": "IPv4", 00:17:26.899 "traddr": "10.0.0.2", 00:17:26.899 "trsvcid": "4420" 00:17:26.899 }, 00:17:26.899 "peer_address": { 00:17:26.899 "trtype": "TCP", 00:17:26.899 "adrfam": "IPv4", 00:17:26.899 "traddr": "10.0.0.1", 00:17:26.899 "trsvcid": "43948" 00:17:26.899 }, 00:17:26.899 "auth": { 00:17:26.899 "state": "completed", 00:17:26.899 "digest": "sha512", 00:17:26.899 "dhgroup": "ffdhe6144" 00:17:26.899 } 00:17:26.899 } 00:17:26.899 ]' 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.899 11:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.899 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.158 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:17:27.158 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:17:27.726 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.726 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.726 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.726 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.726 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.726 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.726 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.726 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.984 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:27.984 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.984 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.984 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:27.984 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.984 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.984 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:27.984 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.984 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.984 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.984 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.984 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.985 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.244 00:17:28.244 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.244 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.244 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.506 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.506 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.506 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.506 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.506 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.506 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.506 { 00:17:28.506 "cntlid": 135, 00:17:28.506 "qid": 0, 00:17:28.506 "state": "enabled", 00:17:28.506 "thread": "nvmf_tgt_poll_group_000", 00:17:28.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:28.506 "listen_address": { 00:17:28.506 "trtype": "TCP", 00:17:28.506 "adrfam": "IPv4", 00:17:28.506 "traddr": "10.0.0.2", 00:17:28.506 "trsvcid": "4420" 00:17:28.506 }, 00:17:28.506 "peer_address": { 00:17:28.506 "trtype": "TCP", 00:17:28.506 "adrfam": "IPv4", 00:17:28.506 "traddr": "10.0.0.1", 00:17:28.506 "trsvcid": "43980" 00:17:28.506 }, 00:17:28.506 "auth": { 00:17:28.506 "state": "completed", 00:17:28.506 "digest": "sha512", 00:17:28.506 "dhgroup": "ffdhe6144" 00:17:28.506 } 00:17:28.506 } 00:17:28.506 ]' 00:17:28.506 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.506 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.506 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.506 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.506 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.765 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.765 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.765 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.765 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:28.765 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:29.335 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.335 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.335 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.335 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.335 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.335 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.335 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.335 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.335 11:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.594 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.161 00:17:30.161 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.162 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.162 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.420 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.420 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.420 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.420 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.420 { 00:17:30.420 "cntlid": 137, 00:17:30.420 "qid": 0, 00:17:30.420 "state": "enabled", 00:17:30.420 "thread": "nvmf_tgt_poll_group_000", 00:17:30.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:30.421 "listen_address": { 00:17:30.421 "trtype": "TCP", 00:17:30.421 "adrfam": "IPv4", 00:17:30.421 "traddr": "10.0.0.2", 00:17:30.421 
"trsvcid": "4420" 00:17:30.421 }, 00:17:30.421 "peer_address": { 00:17:30.421 "trtype": "TCP", 00:17:30.421 "adrfam": "IPv4", 00:17:30.421 "traddr": "10.0.0.1", 00:17:30.421 "trsvcid": "44016" 00:17:30.421 }, 00:17:30.421 "auth": { 00:17:30.421 "state": "completed", 00:17:30.421 "digest": "sha512", 00:17:30.421 "dhgroup": "ffdhe8192" 00:17:30.421 } 00:17:30.421 } 00:17:30.421 ]' 00:17:30.421 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.421 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.421 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.421 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.421 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.421 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.421 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.421 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.680 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:17:30.680 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:17:31.247 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.247 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.247 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.247 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.247 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.247 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.247 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.247 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.506 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:31.506 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.506 11:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.506 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:31.506 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.506 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.506 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.506 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.506 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.506 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.506 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.506 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.506 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.072 00:17:32.072 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.072 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.072 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.072 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.072 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.072 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.072 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.072 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.073 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.073 { 00:17:32.073 "cntlid": 139, 00:17:32.073 "qid": 0, 00:17:32.073 "state": "enabled", 00:17:32.073 "thread": "nvmf_tgt_poll_group_000", 00:17:32.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:32.073 "listen_address": { 00:17:32.073 "trtype": "TCP", 00:17:32.073 "adrfam": "IPv4", 00:17:32.073 "traddr": "10.0.0.2", 00:17:32.073 "trsvcid": "4420" 00:17:32.073 }, 00:17:32.073 "peer_address": { 00:17:32.073 "trtype": "TCP", 00:17:32.073 "adrfam": "IPv4", 00:17:32.073 "traddr": "10.0.0.1", 00:17:32.073 "trsvcid": "47446" 00:17:32.073 }, 00:17:32.073 "auth": { 00:17:32.073 "state": "completed", 00:17:32.073 "digest": "sha512", 00:17:32.073 "dhgroup": "ffdhe8192" 00:17:32.073 } 00:17:32.073 } 00:17:32.073 ]' 00:17:32.073 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.331 11:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.331 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.332 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.332 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.332 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.332 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.332 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.590 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:17:32.590 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: --dhchap-ctrl-secret DHHC-1:02:NDcwZjc4NDYxMTM3OGQxNzg0ODc4Mzc3MDY3NTRhNDNjNDFlZTc1OTlhZmI4MWY5Zji70w==: 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.158 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.725 00:17:33.725 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.725 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.725 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.983 11:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.983 { 00:17:33.983 "cntlid": 141, 00:17:33.983 "qid": 0, 00:17:33.983 "state": "enabled", 00:17:33.983 "thread": "nvmf_tgt_poll_group_000", 00:17:33.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:33.983 "listen_address": { 00:17:33.983 "trtype": "TCP", 00:17:33.983 "adrfam": "IPv4", 00:17:33.983 "traddr": "10.0.0.2", 00:17:33.983 "trsvcid": "4420" 00:17:33.983 }, 00:17:33.983 "peer_address": { 00:17:33.983 "trtype": "TCP", 00:17:33.983 "adrfam": "IPv4", 00:17:33.983 "traddr": "10.0.0.1", 00:17:33.983 "trsvcid": "47472" 00:17:33.983 }, 00:17:33.983 "auth": { 00:17:33.983 "state": "completed", 00:17:33.983 "digest": "sha512", 00:17:33.983 "dhgroup": "ffdhe8192" 00:17:33.983 } 00:17:33.983 } 00:17:33.983 ]' 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.983 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.241 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:17:34.242 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:01:M2IzMGU5MTkxZjA5M2U4NTJjMmZlYjcwZTRmNzhlMTGBhe/q: 00:17:34.809 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.809 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.809 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.809 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.809 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.809 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.810 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.810 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.069 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.637 00:17:35.637 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.637 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.637 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.907 { 00:17:35.907 "cntlid": 143, 00:17:35.907 "qid": 0, 00:17:35.907 "state": "enabled", 00:17:35.907 "thread": "nvmf_tgt_poll_group_000", 00:17:35.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:35.907 "listen_address": { 00:17:35.907 "trtype": "TCP", 00:17:35.907 "adrfam": 
"IPv4", 00:17:35.907 "traddr": "10.0.0.2", 00:17:35.907 "trsvcid": "4420" 00:17:35.907 }, 00:17:35.907 "peer_address": { 00:17:35.907 "trtype": "TCP", 00:17:35.907 "adrfam": "IPv4", 00:17:35.907 "traddr": "10.0.0.1", 00:17:35.907 "trsvcid": "47498" 00:17:35.907 }, 00:17:35.907 "auth": { 00:17:35.907 "state": "completed", 00:17:35.907 "digest": "sha512", 00:17:35.907 "dhgroup": "ffdhe8192" 00:17:35.907 } 00:17:35.907 } 00:17:35.907 ]' 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.907 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.169 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:36.169 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:36.737 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.737 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.737 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.737 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.737 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.737 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:36.737 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:36.737 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:36.737 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.737 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.737 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.996 11:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:36.996 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.996 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.996 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:36.996 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.996 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.996 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.996 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.996 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.996 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.997 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.997 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.997 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.565 00:17:37.565 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.565 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.565 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.565 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.565 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.565 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.565 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.565 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.565 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.565 { 00:17:37.565 "cntlid": 145, 00:17:37.565 "qid": 0, 00:17:37.565 "state": "enabled", 00:17:37.565 "thread": "nvmf_tgt_poll_group_000", 00:17:37.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:37.565 "listen_address": { 00:17:37.565 "trtype": "TCP", 00:17:37.565 "adrfam": "IPv4", 00:17:37.565 "traddr": "10.0.0.2", 00:17:37.565 "trsvcid": "4420" 00:17:37.565 }, 00:17:37.565 "peer_address": { 00:17:37.565 "trtype": "TCP", 00:17:37.565 "adrfam": "IPv4", 00:17:37.565 "traddr": "10.0.0.1", 00:17:37.565 "trsvcid": "47512" 00:17:37.565 }, 00:17:37.565 "auth": { 00:17:37.565 "state": 
"completed", 00:17:37.565 "digest": "sha512", 00:17:37.565 "dhgroup": "ffdhe8192" 00:17:37.565 } 00:17:37.565 } 00:17:37.565 ]' 00:17:37.565 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.565 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.825 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.825 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.825 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.825 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.825 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.825 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.084 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:17:38.084 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjMwNTA1ZmMwNjg0NTNjNjlmNzlmNzRhY2E0N2NmNzVjMjkxYTY1YjZmODRkOTkwVGxomA==: --dhchap-ctrl-secret 
DHHC-1:03:Yzk5YTdiN2E4ZmM5NWJkMmM0MWE5MWYzYzExMGUwOTFhYWMxY2E4NThjNTUwMmQ5NTY4YTFjNjY2NDdhYzA2MWTCH+c=: 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:38.653 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:38.913 request: 00:17:38.913 { 00:17:38.913 "name": "nvme0", 00:17:38.913 "trtype": "tcp", 00:17:38.913 "traddr": "10.0.0.2", 00:17:38.913 "adrfam": "ipv4", 00:17:38.913 "trsvcid": "4420", 00:17:38.913 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:38.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:38.913 "prchk_reftag": false, 00:17:38.913 "prchk_guard": false, 00:17:38.913 "hdgst": false, 00:17:38.913 "ddgst": false, 00:17:38.913 "dhchap_key": "key2", 00:17:38.913 "allow_unrecognized_csi": false, 00:17:38.913 "method": "bdev_nvme_attach_controller", 00:17:38.913 "req_id": 1 00:17:38.913 } 00:17:38.913 Got JSON-RPC error response 00:17:38.913 response: 00:17:38.913 { 00:17:38.913 "code": -5, 00:17:38.913 "message": 
"Input/output error" 00:17:38.913 } 00:17:38.913 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:38.913 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.913 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.913 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.172 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.172 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.172 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.172 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:39.173 11:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.173 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.432 request: 00:17:39.432 { 00:17:39.432 "name": "nvme0", 00:17:39.432 "trtype": "tcp", 00:17:39.432 "traddr": "10.0.0.2", 00:17:39.432 "adrfam": "ipv4", 00:17:39.432 "trsvcid": "4420", 00:17:39.432 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:39.432 "prchk_reftag": false, 00:17:39.432 "prchk_guard": false, 00:17:39.432 "hdgst": 
false, 00:17:39.432 "ddgst": false, 00:17:39.432 "dhchap_key": "key1", 00:17:39.432 "dhchap_ctrlr_key": "ckey2", 00:17:39.432 "allow_unrecognized_csi": false, 00:17:39.432 "method": "bdev_nvme_attach_controller", 00:17:39.432 "req_id": 1 00:17:39.432 } 00:17:39.432 Got JSON-RPC error response 00:17:39.432 response: 00:17:39.432 { 00:17:39.432 "code": -5, 00:17:39.432 "message": "Input/output error" 00:17:39.432 } 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.432 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.433 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.001 request: 00:17:40.001 { 00:17:40.001 "name": "nvme0", 00:17:40.001 "trtype": 
"tcp", 00:17:40.001 "traddr": "10.0.0.2", 00:17:40.001 "adrfam": "ipv4", 00:17:40.001 "trsvcid": "4420", 00:17:40.001 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:40.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:40.001 "prchk_reftag": false, 00:17:40.001 "prchk_guard": false, 00:17:40.001 "hdgst": false, 00:17:40.001 "ddgst": false, 00:17:40.001 "dhchap_key": "key1", 00:17:40.001 "dhchap_ctrlr_key": "ckey1", 00:17:40.001 "allow_unrecognized_csi": false, 00:17:40.001 "method": "bdev_nvme_attach_controller", 00:17:40.001 "req_id": 1 00:17:40.001 } 00:17:40.001 Got JSON-RPC error response 00:17:40.001 response: 00:17:40.001 { 00:17:40.001 "code": -5, 00:17:40.001 "message": "Input/output error" 00:17:40.001 } 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2241686 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2241686 ']' 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2241686 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2241686 00:17:40.001 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.002 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.002 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2241686' 00:17:40.002 killing process with pid 2241686 00:17:40.002 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2241686 00:17:40.002 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2241686 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2264105 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2264105 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2264105 ']' 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.261 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2264105 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2264105 ']' 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.521 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.780 null0 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XSA 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.bhD ]] 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bhD 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.E4S 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.780 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.CKs ]] 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CKs 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.p59 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.AZ6 ]] 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AZ6 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.06n 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.781 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.719 nvme0n1 00:17:41.719 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.719 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.719 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.719 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.719 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.719 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.719 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.719 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.719 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.719 { 00:17:41.719 "cntlid": 1, 00:17:41.719 "qid": 0, 00:17:41.719 "state": "enabled", 00:17:41.719 "thread": "nvmf_tgt_poll_group_000", 00:17:41.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:41.719 "listen_address": { 00:17:41.719 "trtype": "TCP", 00:17:41.719 "adrfam": "IPv4", 00:17:41.719 "traddr": "10.0.0.2", 00:17:41.719 "trsvcid": "4420" 00:17:41.719 }, 00:17:41.719 "peer_address": { 00:17:41.719 "trtype": "TCP", 00:17:41.719 "adrfam": "IPv4", 00:17:41.719 "traddr": 
"10.0.0.1", 00:17:41.719 "trsvcid": "56358" 00:17:41.719 }, 00:17:41.719 "auth": { 00:17:41.719 "state": "completed", 00:17:41.719 "digest": "sha512", 00:17:41.719 "dhgroup": "ffdhe8192" 00:17:41.719 } 00:17:41.719 } 00:17:41.719 ]' 00:17:41.719 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.978 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.978 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.978 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.978 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.978 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.978 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.978 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.237 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:42.237 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:42.805 11:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.805 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.805 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.805 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.805 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.805 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:42.805 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.805 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.805 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.805 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:42.805 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:43.064 11:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.064 request: 00:17:43.064 { 00:17:43.064 "name": "nvme0", 00:17:43.064 "trtype": "tcp", 00:17:43.064 "traddr": "10.0.0.2", 00:17:43.064 "adrfam": "ipv4", 00:17:43.064 "trsvcid": "4420", 00:17:43.064 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:43.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:43.064 "prchk_reftag": false, 00:17:43.064 "prchk_guard": false, 00:17:43.064 "hdgst": false, 00:17:43.064 "ddgst": false, 00:17:43.064 "dhchap_key": "key3", 00:17:43.064 
"allow_unrecognized_csi": false, 00:17:43.064 "method": "bdev_nvme_attach_controller", 00:17:43.064 "req_id": 1 00:17:43.064 } 00:17:43.064 Got JSON-RPC error response 00:17:43.064 response: 00:17:43.064 { 00:17:43.064 "code": -5, 00:17:43.064 "message": "Input/output error" 00:17:43.064 } 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:43.064 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:43.323 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:43.323 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:43.323 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:43.323 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:43.323 11:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.323 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:43.323 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.323 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.323 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.323 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.583 request: 00:17:43.583 { 00:17:43.583 "name": "nvme0", 00:17:43.583 "trtype": "tcp", 00:17:43.583 "traddr": "10.0.0.2", 00:17:43.583 "adrfam": "ipv4", 00:17:43.583 "trsvcid": "4420", 00:17:43.583 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:43.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:43.583 "prchk_reftag": false, 00:17:43.583 "prchk_guard": false, 00:17:43.583 "hdgst": false, 00:17:43.583 "ddgst": false, 00:17:43.583 "dhchap_key": "key3", 00:17:43.583 "allow_unrecognized_csi": false, 00:17:43.583 "method": "bdev_nvme_attach_controller", 00:17:43.583 "req_id": 1 00:17:43.583 } 00:17:43.583 Got JSON-RPC error response 00:17:43.583 response: 00:17:43.583 { 00:17:43.583 "code": -5, 00:17:43.583 "message": "Input/output error" 00:17:43.583 } 00:17:43.583 
11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:43.583 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.583 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.583 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.583 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:43.583 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:43.583 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:43.584 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.584 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.584 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.843 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:44.103 request: 00:17:44.103 { 00:17:44.103 "name": "nvme0", 00:17:44.103 "trtype": "tcp", 00:17:44.103 "traddr": "10.0.0.2", 00:17:44.103 "adrfam": "ipv4", 00:17:44.103 "trsvcid": "4420", 00:17:44.103 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:44.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:44.103 "prchk_reftag": false, 00:17:44.103 "prchk_guard": false, 00:17:44.103 "hdgst": false, 00:17:44.103 "ddgst": false, 00:17:44.103 "dhchap_key": "key0", 00:17:44.103 "dhchap_ctrlr_key": "key1", 00:17:44.103 "allow_unrecognized_csi": false, 00:17:44.103 "method": "bdev_nvme_attach_controller", 00:17:44.103 "req_id": 1 00:17:44.103 } 00:17:44.103 Got JSON-RPC error response 00:17:44.103 response: 00:17:44.103 { 00:17:44.103 "code": -5, 00:17:44.103 "message": "Input/output error" 00:17:44.103 } 00:17:44.103 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:44.103 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.103 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.103 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.103 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:44.103 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:44.103 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:44.362 nvme0n1 00:17:44.362 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:44.362 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:44.362 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.621 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.621 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.622 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.881 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:44.881 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.881 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:44.881 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.881 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:44.881 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:44.881 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:45.449 nvme0n1 00:17:45.449 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:45.449 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:45.449 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.707 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.707 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:45.707 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.707 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.707 
11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.707 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:45.707 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.707 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:45.966 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.966 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:45.966 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4NjZmMWJlMDRjY2EyNzhiNDMyMzcxMDIzOWI2NDBmMGFhNzRjYWQ3ZjU1NDIzM2NmYTZhMDU1YjY2MGEyM4Xd9D4=: 00:17:46.533 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:46.533 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:46.533 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:46.533 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:46.533 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:46.533 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:46.533 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:46.533 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.533 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.792 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:46.792 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:46.792 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:46.792 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:46.792 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.792 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:46.792 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.792 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:46.792 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:46.792 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:47.360 request: 00:17:47.361 { 00:17:47.361 "name": "nvme0", 00:17:47.361 "trtype": "tcp", 00:17:47.361 "traddr": "10.0.0.2", 00:17:47.361 "adrfam": "ipv4", 00:17:47.361 "trsvcid": "4420", 00:17:47.361 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:47.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:47.361 "prchk_reftag": false, 00:17:47.361 "prchk_guard": false, 00:17:47.361 "hdgst": false, 00:17:47.361 "ddgst": false, 00:17:47.361 "dhchap_key": "key1", 00:17:47.361 "allow_unrecognized_csi": false, 00:17:47.361 "method": "bdev_nvme_attach_controller", 00:17:47.361 "req_id": 1 00:17:47.361 } 00:17:47.361 Got JSON-RPC error response 00:17:47.361 response: 00:17:47.361 { 00:17:47.361 "code": -5, 00:17:47.361 "message": "Input/output error" 00:17:47.361 } 00:17:47.361 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:47.361 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:47.361 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:47.361 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:47.361 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:47.361 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:47.361 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:47.927 nvme0n1 00:17:47.927 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:47.927 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:47.927 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.185 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.185 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.185 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.445 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.445 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.445 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.445 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.445 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:48.445 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:48.445 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:48.704 nvme0n1 00:17:48.704 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:48.704 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:48.704 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: '' 2s 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: ]] 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YWU1NGQyMWJhOTNmYWRmY2YyMmIyYTQzNjJmZWExYTZclGpk: 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:48.963 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:50.972 
11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:50.972 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:50.972 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:50.972 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:50.972 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:50.972 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:50.972 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:50.972 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:50.972 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.972 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.230 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.230 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: 2s 00:17:51.230 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:51.230 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:51.230 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:51.230 11:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: 00:17:51.230 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:51.230 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:51.230 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:51.230 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: ]] 00:17:51.230 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDZhNDU1M2UxNDBiM2FjODAzYmJmNGRlZmUzNTAyMTg1MzI0YmUxYzBmNjA1NGUxcVhRCA==: 00:17:51.230 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:51.230 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:53.136 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:54.073 nvme0n1 00:17:54.073 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.073 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.073 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.073 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.073 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.073 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.332 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:54.332 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:54.332 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.591 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.591 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.591 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.591 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.591 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.591 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:54.591 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:54.850 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:54.850 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:54.850 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:55.108 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:55.674 request: 00:17:55.674 { 00:17:55.674 "name": "nvme0", 00:17:55.674 "dhchap_key": "key1", 00:17:55.674 "dhchap_ctrlr_key": "key3", 00:17:55.674 "method": "bdev_nvme_set_keys", 00:17:55.674 "req_id": 1 00:17:55.674 } 00:17:55.674 Got JSON-RPC error response 00:17:55.674 response: 00:17:55.674 { 00:17:55.674 "code": -13, 00:17:55.674 "message": "Permission denied" 00:17:55.674 } 00:17:55.674 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:55.674 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.674 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.674 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.674 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:55.674 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:55.674 11:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.674 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:55.674 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:57.050 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:57.050 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:57.050 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.050 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:57.050 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.051 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.051 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.051 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.051 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:57.051 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:57.051 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:57.618 nvme0n1 00:17:57.618 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:57.618 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.618 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.618 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.618 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.618 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:57.618 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.618 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:57.618 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.618 11:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:57.618 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.618 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.618 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:58.186 request: 00:17:58.186 { 00:17:58.186 "name": "nvme0", 00:17:58.186 "dhchap_key": "key2", 00:17:58.186 "dhchap_ctrlr_key": "key0", 00:17:58.186 "method": "bdev_nvme_set_keys", 00:17:58.186 "req_id": 1 00:17:58.186 } 00:17:58.186 Got JSON-RPC error response 00:17:58.186 response: 00:17:58.186 { 00:17:58.186 "code": -13, 00:17:58.186 "message": "Permission denied" 00:17:58.186 } 00:17:58.186 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:58.186 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.186 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.186 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.186 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:58.186 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:58.186 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.446 11:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:58.446 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:59.382 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:59.382 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:59.382 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2241753 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2241753 ']' 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2241753 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2241753 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2241753' 00:17:59.642 killing process with pid 2241753 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2241753 00:17:59.642 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2241753 00:17:59.901 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:59.901 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.901 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:59.901 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.901 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:59.901 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.901 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.901 rmmod nvme_tcp 00:17:59.901 rmmod nvme_fabrics 00:18:00.161 rmmod nvme_keyring 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2264105 ']' 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2264105 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2264105 ']' 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2264105 
00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2264105 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2264105' 00:18:00.161 killing process with pid 2264105 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2264105 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2264105 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.161 11:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.161 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.700 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:02.700 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.XSA /tmp/spdk.key-sha256.E4S /tmp/spdk.key-sha384.p59 /tmp/spdk.key-sha512.06n /tmp/spdk.key-sha512.bhD /tmp/spdk.key-sha384.CKs /tmp/spdk.key-sha256.AZ6 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:02.700 00:18:02.700 real 2m33.761s 00:18:02.700 user 5m54.651s 00:18:02.700 sys 0m24.263s 00:18:02.700 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.700 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.700 ************************************ 00:18:02.700 END TEST nvmf_auth_target 00:18:02.700 ************************************ 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.700 ************************************ 00:18:02.700 START TEST nvmf_bdevio_no_huge 00:18:02.700 ************************************ 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:02.700 * Looking for test storage... 00:18:02.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.700 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.701 --rc genhtml_branch_coverage=1 00:18:02.701 --rc genhtml_function_coverage=1 00:18:02.701 --rc genhtml_legend=1 00:18:02.701 --rc geninfo_all_blocks=1 00:18:02.701 --rc geninfo_unexecuted_blocks=1 00:18:02.701 00:18:02.701 ' 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.701 --rc genhtml_branch_coverage=1 00:18:02.701 --rc genhtml_function_coverage=1 00:18:02.701 --rc genhtml_legend=1 00:18:02.701 --rc geninfo_all_blocks=1 00:18:02.701 --rc geninfo_unexecuted_blocks=1 00:18:02.701 00:18:02.701 ' 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.701 --rc genhtml_branch_coverage=1 00:18:02.701 --rc genhtml_function_coverage=1 00:18:02.701 --rc genhtml_legend=1 00:18:02.701 --rc geninfo_all_blocks=1 00:18:02.701 --rc geninfo_unexecuted_blocks=1 00:18:02.701 00:18:02.701 ' 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.701 --rc genhtml_branch_coverage=1 
00:18:02.701 --rc genhtml_function_coverage=1 00:18:02.701 --rc genhtml_legend=1 00:18:02.701 --rc geninfo_all_blocks=1 00:18:02.701 --rc geninfo_unexecuted_blocks=1 00:18:02.701 00:18:02.701 ' 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.701 11:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:02.701 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:02.702 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.274 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:09.274 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:09.274 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:09.274 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:09.274 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:09.274 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:09.274 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:09.274 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:09.274 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:18:09.275 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:09.275 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:09.275 Found net devices under 0000:86:00.0: cvl_0_0 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.275 
11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:09.275 Found net devices under 0000:86:00.1: cvl_0_1 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.275 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:09.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:18:09.275 00:18:09.275 --- 10.0.0.2 ping statistics --- 00:18:09.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.275 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:09.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:09.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:18:09.275 00:18:09.275 --- 10.0.0.1 ping statistics --- 00:18:09.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.275 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.275 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2271005 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2271005 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2271005 ']' 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.276 [2024-11-19 11:29:22.284562] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:09.276 [2024-11-19 11:29:22.284616] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:09.276 [2024-11-19 11:29:22.354446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:09.276 [2024-11-19 11:29:22.402285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.276 [2024-11-19 11:29:22.402318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.276 [2024-11-19 11:29:22.402325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.276 [2024-11-19 11:29:22.402331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.276 [2024-11-19 11:29:22.402336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:09.276 [2024-11-19 11:29:22.403594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:09.276 [2024-11-19 11:29:22.403701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:09.276 [2024-11-19 11:29:22.403806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.276 [2024-11-19 11:29:22.403807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.276 [2024-11-19 11:29:22.548390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:09.276 11:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.276 Malloc0 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.276 [2024-11-19 11:29:22.592673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.276 11:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:09.276 { 00:18:09.276 "params": { 00:18:09.276 "name": "Nvme$subsystem", 00:18:09.276 "trtype": "$TEST_TRANSPORT", 00:18:09.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.276 "adrfam": "ipv4", 00:18:09.276 "trsvcid": "$NVMF_PORT", 00:18:09.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.276 "hdgst": ${hdgst:-false}, 00:18:09.276 "ddgst": ${ddgst:-false} 00:18:09.276 }, 00:18:09.276 "method": "bdev_nvme_attach_controller" 00:18:09.276 } 00:18:09.276 EOF 00:18:09.276 )") 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:09.276 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:09.276 "params": { 00:18:09.276 "name": "Nvme1", 00:18:09.276 "trtype": "tcp", 00:18:09.276 "traddr": "10.0.0.2", 00:18:09.276 "adrfam": "ipv4", 00:18:09.276 "trsvcid": "4420", 00:18:09.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.276 "hdgst": false, 00:18:09.276 "ddgst": false 00:18:09.276 }, 00:18:09.276 "method": "bdev_nvme_attach_controller" 00:18:09.276 }' 00:18:09.276 [2024-11-19 11:29:22.644934] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:18:09.276 [2024-11-19 11:29:22.644991] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2271058 ] 00:18:09.276 [2024-11-19 11:29:22.726025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:09.276 [2024-11-19 11:29:22.775123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.276 [2024-11-19 11:29:22.775233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.276 [2024-11-19 11:29:22.775233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.276 I/O targets: 00:18:09.276 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:09.276 00:18:09.276 00:18:09.276 CUnit - A unit testing framework for C - Version 2.1-3 00:18:09.276 http://cunit.sourceforge.net/ 00:18:09.276 00:18:09.276 00:18:09.276 Suite: bdevio tests on: Nvme1n1 00:18:09.276 Test: blockdev write read block ...passed 00:18:09.535 Test: blockdev write zeroes read block ...passed 00:18:09.535 Test: blockdev write zeroes read no split ...passed 00:18:09.535 Test: blockdev write zeroes 
read split ...passed 00:18:09.535 Test: blockdev write zeroes read split partial ...passed 00:18:09.535 Test: blockdev reset ...[2024-11-19 11:29:23.145458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:09.535 [2024-11-19 11:29:23.145523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131c920 (9): Bad file descriptor 00:18:09.535 [2024-11-19 11:29:23.160636] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:09.535 passed 00:18:09.535 Test: blockdev write read 8 blocks ...passed 00:18:09.535 Test: blockdev write read size > 128k ...passed 00:18:09.535 Test: blockdev write read invalid size ...passed 00:18:09.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:09.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:09.535 Test: blockdev write read max offset ...passed 00:18:09.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:09.535 Test: blockdev writev readv 8 blocks ...passed 00:18:09.535 Test: blockdev writev readv 30 x 1block ...passed 00:18:09.794 Test: blockdev writev readv block ...passed 00:18:09.794 Test: blockdev writev readv size > 128k ...passed 00:18:09.794 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:09.794 Test: blockdev comparev and writev ...[2024-11-19 11:29:23.330806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.794 [2024-11-19 11:29:23.330835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.794 [2024-11-19 11:29:23.330849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.795 [2024-11-19 
11:29:23.330858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.795 [2024-11-19 11:29:23.331103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.795 [2024-11-19 11:29:23.331116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:09.795 [2024-11-19 11:29:23.331134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.795 [2024-11-19 11:29:23.331141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:09.795 [2024-11-19 11:29:23.331386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.795 [2024-11-19 11:29:23.331397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:09.795 [2024-11-19 11:29:23.331409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.795 [2024-11-19 11:29:23.331416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:09.795 [2024-11-19 11:29:23.331654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.795 [2024-11-19 11:29:23.331667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:09.795 [2024-11-19 11:29:23.331678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:09.795 [2024-11-19 11:29:23.331686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:09.795 passed 00:18:09.795 Test: blockdev nvme passthru rw ...passed 00:18:09.795 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:29:23.413309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.795 [2024-11-19 11:29:23.413328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:09.795 [2024-11-19 11:29:23.413433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.795 [2024-11-19 11:29:23.413443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:09.795 [2024-11-19 11:29:23.413543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.795 [2024-11-19 11:29:23.413553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:09.795 [2024-11-19 11:29:23.413659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:09.795 [2024-11-19 11:29:23.413669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:09.795 passed 00:18:09.795 Test: blockdev nvme admin passthru ...passed 00:18:09.795 Test: blockdev copy ...passed 00:18:09.795 00:18:09.795 Run Summary: Type Total Ran Passed Failed Inactive 00:18:09.795 suites 1 1 n/a 0 0 00:18:09.795 tests 23 23 23 0 0 00:18:09.795 asserts 152 152 152 0 n/a 00:18:09.795 00:18:09.795 Elapsed time = 0.980 seconds 
00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:10.055 rmmod nvme_tcp 00:18:10.055 rmmod nvme_fabrics 00:18:10.055 rmmod nvme_keyring 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2271005 ']' 00:18:10.055 11:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2271005 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2271005 ']' 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2271005 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.055 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2271005 00:18:10.314 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:10.314 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:10.314 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2271005' 00:18:10.314 killing process with pid 2271005 00:18:10.314 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2271005 00:18:10.314 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2271005 00:18:10.573 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:10.573 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:10.574 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:10.574 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:10.574 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:10.574 11:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:10.574 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:10.574 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:10.574 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:10.574 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.574 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.574 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.479 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:12.479 00:18:12.479 real 0m10.167s 00:18:12.479 user 0m10.486s 00:18:12.479 sys 0m5.321s 00:18:12.479 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.479 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:12.479 ************************************ 00:18:12.479 END TEST nvmf_bdevio_no_huge 00:18:12.479 ************************************ 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:12.739 
************************************ 00:18:12.739 START TEST nvmf_tls 00:18:12.739 ************************************ 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:12.739 * Looking for test storage... 00:18:12.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:12.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.739 --rc genhtml_branch_coverage=1 00:18:12.739 --rc genhtml_function_coverage=1 00:18:12.739 --rc genhtml_legend=1 00:18:12.739 --rc geninfo_all_blocks=1 00:18:12.739 --rc geninfo_unexecuted_blocks=1 00:18:12.739 00:18:12.739 ' 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:12.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.739 --rc genhtml_branch_coverage=1 00:18:12.739 --rc genhtml_function_coverage=1 00:18:12.739 --rc genhtml_legend=1 00:18:12.739 --rc geninfo_all_blocks=1 00:18:12.739 --rc geninfo_unexecuted_blocks=1 00:18:12.739 00:18:12.739 ' 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:12.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.739 --rc genhtml_branch_coverage=1 00:18:12.739 --rc genhtml_function_coverage=1 00:18:12.739 --rc genhtml_legend=1 00:18:12.739 --rc geninfo_all_blocks=1 00:18:12.739 --rc geninfo_unexecuted_blocks=1 00:18:12.739 00:18:12.739 ' 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:12.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.739 --rc genhtml_branch_coverage=1 00:18:12.739 --rc genhtml_function_coverage=1 00:18:12.739 --rc genhtml_legend=1 00:18:12.739 --rc geninfo_all_blocks=1 00:18:12.739 --rc geninfo_unexecuted_blocks=1 00:18:12.739 00:18:12.739 ' 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.739 
11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.739 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:12.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.740 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.999 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:12.999 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:12.999 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:12.999 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.571 11:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:19.571 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:19.571 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.571 11:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:19.571 Found net devices under 0000:86:00.0: cvl_0_0 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:19.571 Found net devices under 0000:86:00.1: cvl_0_1 00:18:19.571 11:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:19.571 
11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:19.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:18:19.571 00:18:19.571 --- 10.0.0.2 ping statistics --- 00:18:19.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.571 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:18:19.571 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:18:19.571 00:18:19.572 --- 10.0.0.1 ping statistics --- 00:18:19.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.572 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2274821 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2274821 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2274821 ']' 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.572 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.572 [2024-11-19 11:29:32.543269] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:18:19.572 [2024-11-19 11:29:32.543320] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.572 [2024-11-19 11:29:32.625665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.572 [2024-11-19 11:29:32.665173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.572 [2024-11-19 11:29:32.665208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:19.572 [2024-11-19 11:29:32.665215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.572 [2024-11-19 11:29:32.665220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.572 [2024-11-19 11:29:32.665225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.572 [2024-11-19 11:29:32.665788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.831 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.831 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:19.831 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:19.831 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.831 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.831 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.831 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:19.831 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:19.831 true 00:18:19.831 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.831 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:20.091 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:20.091 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:20.091 
11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:20.349 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:20.349 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:20.607 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:20.607 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:20.607 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:20.607 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:20.607 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:20.865 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:20.865 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:20.865 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:20.865 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:21.124 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:21.124 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:21.124 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:18:21.383 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:21.383 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:21.383 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:21.383 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:21.383 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:21.641 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:21.641 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:21.901 11:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.RRmUhifBqp 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.GBKbVaHGmB 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.RRmUhifBqp 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.GBKbVaHGmB 00:18:21.901 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:22.160 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:22.420 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.RRmUhifBqp 00:18:22.420 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.RRmUhifBqp 00:18:22.420 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:22.420 [2024-11-19 11:29:36.177782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.420 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:22.679 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:22.938 [2024-11-19 11:29:36.550751] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.938 [2024-11-19 11:29:36.550955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.938 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:23.196 malloc0 00:18:23.196 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:23.196 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.RRmUhifBqp 00:18:23.456 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:23.715 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.RRmUhifBqp 00:18:33.686 Initializing NVMe Controllers 00:18:33.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:33.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:33.686 Initialization complete. Launching workers. 
00:18:33.687 ======================================================== 00:18:33.687 Latency(us) 00:18:33.687 Device Information : IOPS MiB/s Average min max 00:18:33.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16381.07 63.99 3907.07 838.81 5788.53 00:18:33.687 ======================================================== 00:18:33.687 Total : 16381.07 63.99 3907.07 838.81 5788.53 00:18:33.687 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RRmUhifBqp 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RRmUhifBqp 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2277199 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2277199 /var/tmp/bdevperf.sock 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2277199 ']' 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.687 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.687 [2024-11-19 11:29:47.439970] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:18:33.687 [2024-11-19 11:29:47.440017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277199 ] 00:18:33.945 [2024-11-19 11:29:47.515850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.945 [2024-11-19 11:29:47.558846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.945 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.945 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:33.945 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RRmUhifBqp 00:18:34.204 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:34.463 [2024-11-19 11:29:48.005565] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.463 TLSTESTn1 00:18:34.463 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:34.463 Running I/O for 10 seconds... 00:18:36.775 5295.00 IOPS, 20.68 MiB/s [2024-11-19T10:29:51.495Z] 5430.50 IOPS, 21.21 MiB/s [2024-11-19T10:29:52.430Z] 5417.00 IOPS, 21.16 MiB/s [2024-11-19T10:29:53.369Z] 5442.50 IOPS, 21.26 MiB/s [2024-11-19T10:29:54.311Z] 5461.20 IOPS, 21.33 MiB/s [2024-11-19T10:29:55.245Z] 5469.50 IOPS, 21.37 MiB/s [2024-11-19T10:29:56.618Z] 5453.57 IOPS, 21.30 MiB/s [2024-11-19T10:29:57.551Z] 5439.38 IOPS, 21.25 MiB/s [2024-11-19T10:29:58.484Z] 5437.22 IOPS, 21.24 MiB/s [2024-11-19T10:29:58.484Z] 5423.70 IOPS, 21.19 MiB/s 00:18:44.703 Latency(us) 00:18:44.703 [2024-11-19T10:29:58.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.703 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:44.703 Verification LBA range: start 0x0 length 0x2000 00:18:44.703 TLSTESTn1 : 10.01 5428.60 21.21 0.00 0.00 23544.12 5556.31 28151.99 00:18:44.703 [2024-11-19T10:29:58.484Z] =================================================================================================================== 00:18:44.703 [2024-11-19T10:29:58.484Z] Total : 5428.60 21.21 0.00 0.00 23544.12 5556.31 28151.99 00:18:44.703 { 00:18:44.703 "results": [ 00:18:44.703 { 00:18:44.703 "job": "TLSTESTn1", 00:18:44.703 "core_mask": "0x4", 00:18:44.703 "workload": "verify", 00:18:44.703 "status": "finished", 00:18:44.703 "verify_range": { 00:18:44.703 "start": 0, 00:18:44.703 "length": 8192 00:18:44.703 }, 00:18:44.703 "queue_depth": 128, 00:18:44.703 "io_size": 4096, 00:18:44.703 "runtime": 10.01418, 00:18:44.703 "iops": 
5428.602242020815, 00:18:44.703 "mibps": 21.205477507893807, 00:18:44.703 "io_failed": 0, 00:18:44.703 "io_timeout": 0, 00:18:44.703 "avg_latency_us": 23544.124935358046, 00:18:44.703 "min_latency_us": 5556.313043478261, 00:18:44.703 "max_latency_us": 28151.98608695652 00:18:44.703 } 00:18:44.703 ], 00:18:44.703 "core_count": 1 00:18:44.703 } 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2277199 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2277199 ']' 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2277199 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2277199 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2277199' 00:18:44.704 killing process with pid 2277199 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2277199 00:18:44.704 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.704 00:18:44.704 Latency(us) 00:18:44.704 [2024-11-19T10:29:58.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.704 [2024-11-19T10:29:58.485Z] 
=================================================================================================================== 00:18:44.704 [2024-11-19T10:29:58.485Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2277199 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GBKbVaHGmB 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GBKbVaHGmB 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GBKbVaHGmB 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GBKbVaHGmB 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2279012 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2279012 /var/tmp/bdevperf.sock 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2279012 ']' 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.704 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.962 [2024-11-19 11:29:58.513300] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:44.962 [2024-11-19 11:29:58.513346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279012 ] 00:18:44.962 [2024-11-19 11:29:58.588050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.962 [2024-11-19 11:29:58.630145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.962 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.962 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.962 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GBKbVaHGmB 00:18:45.220 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:45.478 [2024-11-19 11:29:59.072526] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.478 [2024-11-19 11:29:59.079057] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:45.478 [2024-11-19 11:29:59.079801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8f170 (107): Transport endpoint is not connected 00:18:45.478 [2024-11-19 11:29:59.080794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8f170 (9): Bad file descriptor 00:18:45.478 
[2024-11-19 11:29:59.081796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:45.478 [2024-11-19 11:29:59.081807] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:45.478 [2024-11-19 11:29:59.081815] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:45.478 [2024-11-19 11:29:59.081825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:45.478 request: 00:18:45.478 { 00:18:45.478 "name": "TLSTEST", 00:18:45.478 "trtype": "tcp", 00:18:45.478 "traddr": "10.0.0.2", 00:18:45.478 "adrfam": "ipv4", 00:18:45.478 "trsvcid": "4420", 00:18:45.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.478 "prchk_reftag": false, 00:18:45.478 "prchk_guard": false, 00:18:45.478 "hdgst": false, 00:18:45.478 "ddgst": false, 00:18:45.478 "psk": "key0", 00:18:45.478 "allow_unrecognized_csi": false, 00:18:45.478 "method": "bdev_nvme_attach_controller", 00:18:45.478 "req_id": 1 00:18:45.478 } 00:18:45.478 Got JSON-RPC error response 00:18:45.478 response: 00:18:45.478 { 00:18:45.478 "code": -5, 00:18:45.478 "message": "Input/output error" 00:18:45.478 } 00:18:45.478 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2279012 00:18:45.478 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2279012 ']' 00:18:45.478 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2279012 00:18:45.478 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.478 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.478 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279012 00:18:45.478 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:45.478 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:45.478 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279012' 00:18:45.478 killing process with pid 2279012 00:18:45.478 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2279012 00:18:45.478 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.478 00:18:45.478 Latency(us) 00:18:45.478 [2024-11-19T10:29:59.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.478 [2024-11-19T10:29:59.259Z] =================================================================================================================== 00:18:45.478 [2024-11-19T10:29:59.259Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:45.478 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2279012 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RRmUhifBqp 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RRmUhifBqp 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RRmUhifBqp 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RRmUhifBqp 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2279244 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2279244 
/var/tmp/bdevperf.sock 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2279244 ']' 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.736 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.736 [2024-11-19 11:29:59.352850] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:45.736 [2024-11-19 11:29:59.352898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279244 ] 00:18:45.736 [2024-11-19 11:29:59.426456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.736 [2024-11-19 11:29:59.467730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.995 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.995 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.995 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RRmUhifBqp 00:18:45.995 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:46.254 [2024-11-19 11:29:59.943254] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.254 [2024-11-19 11:29:59.951160] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:46.254 [2024-11-19 11:29:59.951185] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:46.254 [2024-11-19 11:29:59.951210] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:46.254 [2024-11-19 11:29:59.951669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1338170 (107): Transport endpoint is not connected 00:18:46.254 [2024-11-19 11:29:59.952663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1338170 (9): Bad file descriptor 00:18:46.254 [2024-11-19 11:29:59.953665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:46.254 [2024-11-19 11:29:59.953675] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:46.254 [2024-11-19 11:29:59.953683] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:46.254 [2024-11-19 11:29:59.953694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:46.254 request: 00:18:46.254 { 00:18:46.254 "name": "TLSTEST", 00:18:46.254 "trtype": "tcp", 00:18:46.254 "traddr": "10.0.0.2", 00:18:46.254 "adrfam": "ipv4", 00:18:46.254 "trsvcid": "4420", 00:18:46.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.254 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:46.254 "prchk_reftag": false, 00:18:46.254 "prchk_guard": false, 00:18:46.254 "hdgst": false, 00:18:46.254 "ddgst": false, 00:18:46.254 "psk": "key0", 00:18:46.254 "allow_unrecognized_csi": false, 00:18:46.254 "method": "bdev_nvme_attach_controller", 00:18:46.254 "req_id": 1 00:18:46.254 } 00:18:46.254 Got JSON-RPC error response 00:18:46.254 response: 00:18:46.254 { 00:18:46.254 "code": -5, 00:18:46.254 "message": "Input/output error" 00:18:46.254 } 00:18:46.254 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2279244 00:18:46.254 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2279244 ']' 00:18:46.254 11:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2279244 00:18:46.254 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.254 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.254 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279244 00:18:46.254 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:46.254 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:46.254 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279244' 00:18:46.254 killing process with pid 2279244 00:18:46.254 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2279244 00:18:46.254 Received shutdown signal, test time was about 10.000000 seconds 00:18:46.254 00:18:46.254 Latency(us) 00:18:46.254 [2024-11-19T10:30:00.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.254 [2024-11-19T10:30:00.035Z] =================================================================================================================== 00:18:46.254 [2024-11-19T10:30:00.036Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:46.255 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2279244 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.514 11:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RRmUhifBqp 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RRmUhifBqp 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RRmUhifBqp 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RRmUhifBqp 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2279314 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2279314 /var/tmp/bdevperf.sock 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2279314 ']' 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.514 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.514 [2024-11-19 11:30:00.222393] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:46.514 [2024-11-19 11:30:00.222443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279314 ] 00:18:46.514 [2024-11-19 11:30:00.282926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.772 [2024-11-19 11:30:00.324605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.772 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.772 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.772 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RRmUhifBqp 00:18:47.031 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.031 [2024-11-19 11:30:00.795679] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.031 [2024-11-19 11:30:00.803354] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:47.031 [2024-11-19 11:30:00.803377] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:47.031 [2024-11-19 11:30:00.803405] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:47.031 [2024-11-19 11:30:00.804059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124c170 (107): Transport endpoint is not connected 00:18:47.031 [2024-11-19 11:30:00.805051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124c170 (9): Bad file descriptor 00:18:47.031 [2024-11-19 11:30:00.806054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:47.031 [2024-11-19 11:30:00.806077] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:47.031 [2024-11-19 11:30:00.806086] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:47.031 [2024-11-19 11:30:00.806099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:47.031 request: 00:18:47.031 { 00:18:47.031 "name": "TLSTEST", 00:18:47.031 "trtype": "tcp", 00:18:47.031 "traddr": "10.0.0.2", 00:18:47.031 "adrfam": "ipv4", 00:18:47.031 "trsvcid": "4420", 00:18:47.031 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:47.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.031 "prchk_reftag": false, 00:18:47.031 "prchk_guard": false, 00:18:47.031 "hdgst": false, 00:18:47.031 "ddgst": false, 00:18:47.031 "psk": "key0", 00:18:47.031 "allow_unrecognized_csi": false, 00:18:47.031 "method": "bdev_nvme_attach_controller", 00:18:47.031 "req_id": 1 00:18:47.031 } 00:18:47.031 Got JSON-RPC error response 00:18:47.031 response: 00:18:47.031 { 00:18:47.031 "code": -5, 00:18:47.031 "message": "Input/output error" 00:18:47.031 } 00:18:47.361 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2279314 00:18:47.361 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2279314 ']' 00:18:47.361 11:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2279314 00:18:47.361 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:47.361 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.361 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279314 00:18:47.361 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:47.361 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:47.361 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279314' 00:18:47.361 killing process with pid 2279314 00:18:47.361 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2279314 00:18:47.361 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.361 00:18:47.361 Latency(us) 00:18:47.361 [2024-11-19T10:30:01.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.361 [2024-11-19T10:30:01.142Z] =================================================================================================================== 00:18:47.361 [2024-11-19T10:30:01.142Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.361 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2279314 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:47.361 11:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2279566 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.361 11:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2279566 /var/tmp/bdevperf.sock 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2279566 ']' 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.361 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.361 [2024-11-19 11:30:01.089493] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:47.361 [2024-11-19 11:30:01.089542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279566 ] 00:18:47.693 [2024-11-19 11:30:01.164660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.693 [2024-11-19 11:30:01.206877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.693 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.693 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.693 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:47.975 [2024-11-19 11:30:01.469121] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:47.975 [2024-11-19 11:30:01.469152] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:47.975 request: 00:18:47.975 { 00:18:47.975 "name": "key0", 00:18:47.975 "path": "", 00:18:47.975 "method": "keyring_file_add_key", 00:18:47.975 "req_id": 1 00:18:47.975 } 00:18:47.975 Got JSON-RPC error response 00:18:47.976 response: 00:18:47.976 { 00:18:47.976 "code": -1, 00:18:47.976 "message": "Operation not permitted" 00:18:47.976 } 00:18:47.976 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.976 [2024-11-19 11:30:01.661722] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:47.976 [2024-11-19 11:30:01.661757] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:47.976 request: 00:18:47.976 { 00:18:47.976 "name": "TLSTEST", 00:18:47.976 "trtype": "tcp", 00:18:47.976 "traddr": "10.0.0.2", 00:18:47.976 "adrfam": "ipv4", 00:18:47.976 "trsvcid": "4420", 00:18:47.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.976 "prchk_reftag": false, 00:18:47.976 "prchk_guard": false, 00:18:47.976 "hdgst": false, 00:18:47.976 "ddgst": false, 00:18:47.976 "psk": "key0", 00:18:47.976 "allow_unrecognized_csi": false, 00:18:47.976 "method": "bdev_nvme_attach_controller", 00:18:47.976 "req_id": 1 00:18:47.976 } 00:18:47.976 Got JSON-RPC error response 00:18:47.976 response: 00:18:47.976 { 00:18:47.976 "code": -126, 00:18:47.976 "message": "Required key not available" 00:18:47.976 } 00:18:47.976 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2279566 00:18:47.976 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2279566 ']' 00:18:47.976 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2279566 00:18:47.976 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:47.976 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.976 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279566 00:18:47.976 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:47.976 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:47.976 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279566' 00:18:47.976 killing process with pid 2279566 
00:18:47.976 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2279566 00:18:47.976 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.976 00:18:47.976 Latency(us) 00:18:47.976 [2024-11-19T10:30:01.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.976 [2024-11-19T10:30:01.757Z] =================================================================================================================== 00:18:47.976 [2024-11-19T10:30:01.757Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.976 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2279566 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2274821 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2274821 ']' 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2274821 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2274821 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2274821' 00:18:48.234 killing process with pid 2274821 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2274821 00:18:48.234 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2274821 00:18:48.493 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.66Q0kCvEFX 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:48.494 11:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.66Q0kCvEFX 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2279872 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2279872 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2279872 ']' 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.494 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.494 [2024-11-19 11:30:02.212516] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:48.494 [2024-11-19 11:30:02.212561] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.753 [2024-11-19 11:30:02.291472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.753 [2024-11-19 11:30:02.331875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.753 [2024-11-19 11:30:02.331911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.753 [2024-11-19 11:30:02.331919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.753 [2024-11-19 11:30:02.331925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.753 [2024-11-19 11:30:02.331933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:48.753 [2024-11-19 11:30:02.332504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.753 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.753 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:48.753 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.753 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.753 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.753 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.753 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.66Q0kCvEFX 00:18:48.753 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.66Q0kCvEFX 00:18:48.753 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:49.012 [2024-11-19 11:30:02.640105] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.012 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:49.271 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:49.530 [2024-11-19 11:30:03.053157] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.530 [2024-11-19 11:30:03.053341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:49.530 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:49.530 malloc0 00:18:49.530 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:49.788 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.66Q0kCvEFX 00:18:50.047 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.66Q0kCvEFX 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.66Q0kCvEFX 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2280127 00:18:50.305 11:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2280127 /var/tmp/bdevperf.sock 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2280127 ']' 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.305 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.305 [2024-11-19 11:30:03.913762] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:50.305 [2024-11-19 11:30:03.913809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2280127 ] 00:18:50.305 [2024-11-19 11:30:03.990469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.305 [2024-11-19 11:30:04.031544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.563 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.563 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.563 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.66Q0kCvEFX 00:18:50.563 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:50.822 [2024-11-19 11:30:04.514574] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.822 TLSTESTn1 00:18:51.080 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:51.080 Running I/O for 10 seconds... 
00:18:52.945 5451.00 IOPS, 21.29 MiB/s [2024-11-19T10:30:08.102Z] 5501.00 IOPS, 21.49 MiB/s [2024-11-19T10:30:09.036Z] 5520.00 IOPS, 21.56 MiB/s [2024-11-19T10:30:09.970Z] 5505.25 IOPS, 21.50 MiB/s [2024-11-19T10:30:10.905Z] 5509.00 IOPS, 21.52 MiB/s [2024-11-19T10:30:11.840Z] 5488.33 IOPS, 21.44 MiB/s [2024-11-19T10:30:12.775Z] 5468.14 IOPS, 21.36 MiB/s [2024-11-19T10:30:14.149Z] 5467.38 IOPS, 21.36 MiB/s [2024-11-19T10:30:15.083Z] 5472.33 IOPS, 21.38 MiB/s [2024-11-19T10:30:15.083Z] 5475.40 IOPS, 21.39 MiB/s 00:19:01.302 Latency(us) 00:19:01.302 [2024-11-19T10:30:15.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.302 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:01.302 Verification LBA range: start 0x0 length 0x2000 00:19:01.302 TLSTESTn1 : 10.01 5481.00 21.41 0.00 0.00 23319.25 5812.76 27810.06 00:19:01.302 [2024-11-19T10:30:15.083Z] =================================================================================================================== 00:19:01.302 [2024-11-19T10:30:15.083Z] Total : 5481.00 21.41 0.00 0.00 23319.25 5812.76 27810.06 00:19:01.302 { 00:19:01.302 "results": [ 00:19:01.302 { 00:19:01.302 "job": "TLSTESTn1", 00:19:01.302 "core_mask": "0x4", 00:19:01.302 "workload": "verify", 00:19:01.302 "status": "finished", 00:19:01.302 "verify_range": { 00:19:01.302 "start": 0, 00:19:01.302 "length": 8192 00:19:01.302 }, 00:19:01.302 "queue_depth": 128, 00:19:01.302 "io_size": 4096, 00:19:01.302 "runtime": 10.012945, 00:19:01.302 "iops": 5481.004839235609, 00:19:01.302 "mibps": 21.4101751532641, 00:19:01.302 "io_failed": 0, 00:19:01.302 "io_timeout": 0, 00:19:01.302 "avg_latency_us": 23319.251046477635, 00:19:01.302 "min_latency_us": 5812.758260869565, 00:19:01.302 "max_latency_us": 27810.059130434784 00:19:01.302 } 00:19:01.302 ], 00:19:01.302 "core_count": 1 00:19:01.302 } 00:19:01.302 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:19:01.302 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2280127 00:19:01.302 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2280127 ']' 00:19:01.302 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2280127 00:19:01.302 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:01.302 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.302 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2280127 00:19:01.302 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:01.302 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:01.302 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2280127' 00:19:01.302 killing process with pid 2280127 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2280127 00:19:01.303 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.303 00:19:01.303 Latency(us) 00:19:01.303 [2024-11-19T10:30:15.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.303 [2024-11-19T10:30:15.084Z] =================================================================================================================== 00:19:01.303 [2024-11-19T10:30:15.084Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2280127 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.66Q0kCvEFX 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.66Q0kCvEFX 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.66Q0kCvEFX 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.66Q0kCvEFX 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.66Q0kCvEFX 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2282352 00:19:01.303 
11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2282352 /var/tmp/bdevperf.sock 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2282352 ']' 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.303 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.303 [2024-11-19 11:30:15.027576] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:19:01.303 [2024-11-19 11:30:15.027627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2282352 ] 00:19:01.561 [2024-11-19 11:30:15.102204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.561 [2024-11-19 11:30:15.139216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.561 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.561 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:01.561 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.66Q0kCvEFX 00:19:01.819 [2024-11-19 11:30:15.409025] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.66Q0kCvEFX': 0100666 00:19:01.819 [2024-11-19 11:30:15.409058] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:01.819 request: 00:19:01.819 { 00:19:01.819 "name": "key0", 00:19:01.819 "path": "/tmp/tmp.66Q0kCvEFX", 00:19:01.819 "method": "keyring_file_add_key", 00:19:01.819 "req_id": 1 00:19:01.819 } 00:19:01.819 Got JSON-RPC error response 00:19:01.819 response: 00:19:01.819 { 00:19:01.819 "code": -1, 00:19:01.819 "message": "Operation not permitted" 00:19:01.819 } 00:19:01.819 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.078 [2024-11-19 11:30:15.605632] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.078 [2024-11-19 11:30:15.605667] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:02.078 request: 00:19:02.078 { 00:19:02.078 "name": "TLSTEST", 00:19:02.078 "trtype": "tcp", 00:19:02.078 "traddr": "10.0.0.2", 00:19:02.078 "adrfam": "ipv4", 00:19:02.078 "trsvcid": "4420", 00:19:02.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.078 "prchk_reftag": false, 00:19:02.078 "prchk_guard": false, 00:19:02.078 "hdgst": false, 00:19:02.078 "ddgst": false, 00:19:02.078 "psk": "key0", 00:19:02.078 "allow_unrecognized_csi": false, 00:19:02.078 "method": "bdev_nvme_attach_controller", 00:19:02.078 "req_id": 1 00:19:02.078 } 00:19:02.078 Got JSON-RPC error response 00:19:02.078 response: 00:19:02.078 { 00:19:02.078 "code": -126, 00:19:02.078 "message": "Required key not available" 00:19:02.078 } 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2282352 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2282352 ']' 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2282352 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2282352 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2282352' 00:19:02.078 killing process with pid 2282352 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2282352 00:19:02.078 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.078 00:19:02.078 Latency(us) 00:19:02.078 [2024-11-19T10:30:15.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.078 [2024-11-19T10:30:15.859Z] =================================================================================================================== 00:19:02.078 [2024-11-19T10:30:15.859Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2282352 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2279872 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2279872 ']' 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2279872 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.078 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279872 00:19:02.337 
11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:02.337 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:02.337 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279872' 00:19:02.337 killing process with pid 2279872 00:19:02.337 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2279872 00:19:02.337 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2279872 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2282589 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2282589 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2282589 ']' 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:02.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.337 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.337 [2024-11-19 11:30:16.104391] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:02.337 [2024-11-19 11:30:16.104438] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.596 [2024-11-19 11:30:16.184074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.596 [2024-11-19 11:30:16.219677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.596 [2024-11-19 11:30:16.219713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.596 [2024-11-19 11:30:16.219720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.596 [2024-11-19 11:30:16.219726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.596 [2024-11-19 11:30:16.219731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:02.596 [2024-11-19 11:30:16.220330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.66Q0kCvEFX 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.66Q0kCvEFX 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.66Q0kCvEFX 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.66Q0kCvEFX 00:19:02.596 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:02.854 [2024-11-19 11:30:16.540022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.854 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:03.113 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:03.372 [2024-11-19 11:30:16.904969] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.372 [2024-11-19 11:30:16.905164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.372 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:03.372 malloc0 00:19:03.372 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:03.630 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.66Q0kCvEFX 00:19:03.889 [2024-11-19 11:30:17.458679] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.66Q0kCvEFX': 0100666 00:19:03.889 [2024-11-19 11:30:17.458715] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:03.889 request: 00:19:03.889 { 00:19:03.889 "name": "key0", 00:19:03.889 "path": "/tmp/tmp.66Q0kCvEFX", 00:19:03.889 "method": "keyring_file_add_key", 00:19:03.889 "req_id": 1 
00:19:03.889 } 00:19:03.889 Got JSON-RPC error response 00:19:03.889 response: 00:19:03.889 { 00:19:03.889 "code": -1, 00:19:03.889 "message": "Operation not permitted" 00:19:03.889 } 00:19:03.889 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:03.889 [2024-11-19 11:30:17.647219] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:03.889 [2024-11-19 11:30:17.647256] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:03.889 request: 00:19:03.889 { 00:19:03.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.889 "host": "nqn.2016-06.io.spdk:host1", 00:19:03.889 "psk": "key0", 00:19:03.889 "method": "nvmf_subsystem_add_host", 00:19:03.889 "req_id": 1 00:19:03.889 } 00:19:03.889 Got JSON-RPC error response 00:19:03.889 response: 00:19:03.889 { 00:19:03.889 "code": -32603, 00:19:03.889 "message": "Internal error" 00:19:03.889 } 00:19:03.889 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:03.889 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.889 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.889 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.889 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2282589 00:19:03.889 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2282589 ']' 00:19:03.889 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2282589 00:19:03.889 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:04.148 11:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.148 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2282589 00:19:04.148 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:04.148 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:04.148 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2282589' 00:19:04.148 killing process with pid 2282589 00:19:04.148 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2282589 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2282589 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.66Q0kCvEFX 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2282860 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2282860 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2282860 ']' 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.149 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.408 [2024-11-19 11:30:17.933852] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:04.408 [2024-11-19 11:30:17.933900] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.408 [2024-11-19 11:30:18.009729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.408 [2024-11-19 11:30:18.046028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.408 [2024-11-19 11:30:18.046064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.408 [2024-11-19 11:30:18.046073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.408 [2024-11-19 11:30:18.046079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.408 [2024-11-19 11:30:18.046085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:04.408 [2024-11-19 11:30:18.046662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.408 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.408 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:04.408 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:04.408 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.408 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.408 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.408 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.66Q0kCvEFX 00:19:04.408 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.66Q0kCvEFX 00:19:04.408 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:04.666 [2024-11-19 11:30:18.361657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.666 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:04.925 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:05.187 [2024-11-19 11:30:18.746665] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.187 [2024-11-19 11:30:18.746866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:05.187 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:05.187 malloc0 00:19:05.187 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:05.446 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.66Q0kCvEFX 00:19:05.705 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:05.964 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.964 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2283120 00:19:05.964 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:05.964 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2283120 /var/tmp/bdevperf.sock 00:19:05.964 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2283120 ']' 00:19:05.964 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.964 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.964 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:05.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.964 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.964 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.964 [2024-11-19 11:30:19.584381] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:05.964 [2024-11-19 11:30:19.584429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283120 ] 00:19:05.964 [2024-11-19 11:30:19.658260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.964 [2024-11-19 11:30:19.700902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.223 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.223 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:06.223 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.66Q0kCvEFX 00:19:06.223 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:06.481 [2024-11-19 11:30:20.181528] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.481 TLSTESTn1 00:19:06.740 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:06.999 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:06.999 "subsystems": [ 00:19:06.999 { 00:19:06.999 "subsystem": "keyring", 00:19:06.999 "config": [ 00:19:06.999 { 00:19:06.999 "method": "keyring_file_add_key", 00:19:06.999 "params": { 00:19:06.999 "name": "key0", 00:19:06.999 "path": "/tmp/tmp.66Q0kCvEFX" 00:19:06.999 } 00:19:06.999 } 00:19:06.999 ] 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "subsystem": "iobuf", 00:19:06.999 "config": [ 00:19:06.999 { 00:19:06.999 "method": "iobuf_set_options", 00:19:06.999 "params": { 00:19:06.999 "small_pool_count": 8192, 00:19:06.999 "large_pool_count": 1024, 00:19:06.999 "small_bufsize": 8192, 00:19:06.999 "large_bufsize": 135168, 00:19:06.999 "enable_numa": false 00:19:06.999 } 00:19:06.999 } 00:19:06.999 ] 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "subsystem": "sock", 00:19:06.999 "config": [ 00:19:06.999 { 00:19:06.999 "method": "sock_set_default_impl", 00:19:06.999 "params": { 00:19:06.999 "impl_name": "posix" 00:19:06.999 } 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "method": "sock_impl_set_options", 00:19:06.999 "params": { 00:19:06.999 "impl_name": "ssl", 00:19:06.999 "recv_buf_size": 4096, 00:19:06.999 "send_buf_size": 4096, 00:19:06.999 "enable_recv_pipe": true, 00:19:06.999 "enable_quickack": false, 00:19:06.999 "enable_placement_id": 0, 00:19:06.999 "enable_zerocopy_send_server": true, 00:19:06.999 "enable_zerocopy_send_client": false, 00:19:06.999 "zerocopy_threshold": 0, 00:19:06.999 "tls_version": 0, 00:19:06.999 "enable_ktls": false 00:19:06.999 } 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "method": "sock_impl_set_options", 00:19:06.999 "params": { 00:19:06.999 "impl_name": "posix", 00:19:06.999 "recv_buf_size": 2097152, 00:19:06.999 "send_buf_size": 2097152, 00:19:06.999 "enable_recv_pipe": true, 00:19:06.999 "enable_quickack": false, 00:19:06.999 "enable_placement_id": 0, 
00:19:06.999 "enable_zerocopy_send_server": true, 00:19:06.999 "enable_zerocopy_send_client": false, 00:19:06.999 "zerocopy_threshold": 0, 00:19:06.999 "tls_version": 0, 00:19:06.999 "enable_ktls": false 00:19:06.999 } 00:19:06.999 } 00:19:06.999 ] 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "subsystem": "vmd", 00:19:06.999 "config": [] 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "subsystem": "accel", 00:19:06.999 "config": [ 00:19:06.999 { 00:19:06.999 "method": "accel_set_options", 00:19:06.999 "params": { 00:19:06.999 "small_cache_size": 128, 00:19:06.999 "large_cache_size": 16, 00:19:06.999 "task_count": 2048, 00:19:06.999 "sequence_count": 2048, 00:19:06.999 "buf_count": 2048 00:19:06.999 } 00:19:06.999 } 00:19:06.999 ] 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "subsystem": "bdev", 00:19:06.999 "config": [ 00:19:06.999 { 00:19:06.999 "method": "bdev_set_options", 00:19:06.999 "params": { 00:19:06.999 "bdev_io_pool_size": 65535, 00:19:06.999 "bdev_io_cache_size": 256, 00:19:06.999 "bdev_auto_examine": true, 00:19:06.999 "iobuf_small_cache_size": 128, 00:19:06.999 "iobuf_large_cache_size": 16 00:19:06.999 } 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "method": "bdev_raid_set_options", 00:19:06.999 "params": { 00:19:06.999 "process_window_size_kb": 1024, 00:19:06.999 "process_max_bandwidth_mb_sec": 0 00:19:06.999 } 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "method": "bdev_iscsi_set_options", 00:19:06.999 "params": { 00:19:06.999 "timeout_sec": 30 00:19:06.999 } 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "method": "bdev_nvme_set_options", 00:19:06.999 "params": { 00:19:06.999 "action_on_timeout": "none", 00:19:06.999 "timeout_us": 0, 00:19:06.999 "timeout_admin_us": 0, 00:19:06.999 "keep_alive_timeout_ms": 10000, 00:19:06.999 "arbitration_burst": 0, 00:19:06.999 "low_priority_weight": 0, 00:19:06.999 "medium_priority_weight": 0, 00:19:06.999 "high_priority_weight": 0, 00:19:06.999 "nvme_adminq_poll_period_us": 10000, 00:19:06.999 "nvme_ioq_poll_period_us": 0, 
00:19:06.999 "io_queue_requests": 0, 00:19:06.999 "delay_cmd_submit": true, 00:19:06.999 "transport_retry_count": 4, 00:19:06.999 "bdev_retry_count": 3, 00:19:06.999 "transport_ack_timeout": 0, 00:19:06.999 "ctrlr_loss_timeout_sec": 0, 00:19:06.999 "reconnect_delay_sec": 0, 00:19:06.999 "fast_io_fail_timeout_sec": 0, 00:19:06.999 "disable_auto_failback": false, 00:19:06.999 "generate_uuids": false, 00:19:06.999 "transport_tos": 0, 00:19:06.999 "nvme_error_stat": false, 00:19:06.999 "rdma_srq_size": 0, 00:19:06.999 "io_path_stat": false, 00:19:06.999 "allow_accel_sequence": false, 00:19:06.999 "rdma_max_cq_size": 0, 00:19:06.999 "rdma_cm_event_timeout_ms": 0, 00:19:06.999 "dhchap_digests": [ 00:19:06.999 "sha256", 00:19:06.999 "sha384", 00:19:06.999 "sha512" 00:19:06.999 ], 00:19:06.999 "dhchap_dhgroups": [ 00:19:06.999 "null", 00:19:06.999 "ffdhe2048", 00:19:06.999 "ffdhe3072", 00:19:06.999 "ffdhe4096", 00:19:06.999 "ffdhe6144", 00:19:06.999 "ffdhe8192" 00:19:06.999 ] 00:19:06.999 } 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "method": "bdev_nvme_set_hotplug", 00:19:06.999 "params": { 00:19:06.999 "period_us": 100000, 00:19:06.999 "enable": false 00:19:06.999 } 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "method": "bdev_malloc_create", 00:19:06.999 "params": { 00:19:06.999 "name": "malloc0", 00:19:06.999 "num_blocks": 8192, 00:19:06.999 "block_size": 4096, 00:19:06.999 "physical_block_size": 4096, 00:19:06.999 "uuid": "e7e5685d-88c8-405d-ad3d-6bbbd654797e", 00:19:06.999 "optimal_io_boundary": 0, 00:19:06.999 "md_size": 0, 00:19:06.999 "dif_type": 0, 00:19:06.999 "dif_is_head_of_md": false, 00:19:06.999 "dif_pi_format": 0 00:19:06.999 } 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "method": "bdev_wait_for_examine" 00:19:06.999 } 00:19:06.999 ] 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "subsystem": "nbd", 00:19:06.999 "config": [] 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "subsystem": "scheduler", 00:19:06.999 "config": [ 00:19:06.999 { 00:19:06.999 "method": 
"framework_set_scheduler", 00:19:06.999 "params": { 00:19:06.999 "name": "static" 00:19:06.999 } 00:19:06.999 } 00:19:06.999 ] 00:19:06.999 }, 00:19:06.999 { 00:19:06.999 "subsystem": "nvmf", 00:19:06.999 "config": [ 00:19:06.999 { 00:19:06.999 "method": "nvmf_set_config", 00:19:07.000 "params": { 00:19:07.000 "discovery_filter": "match_any", 00:19:07.000 "admin_cmd_passthru": { 00:19:07.000 "identify_ctrlr": false 00:19:07.000 }, 00:19:07.000 "dhchap_digests": [ 00:19:07.000 "sha256", 00:19:07.000 "sha384", 00:19:07.000 "sha512" 00:19:07.000 ], 00:19:07.000 "dhchap_dhgroups": [ 00:19:07.000 "null", 00:19:07.000 "ffdhe2048", 00:19:07.000 "ffdhe3072", 00:19:07.000 "ffdhe4096", 00:19:07.000 "ffdhe6144", 00:19:07.000 "ffdhe8192" 00:19:07.000 ] 00:19:07.000 } 00:19:07.000 }, 00:19:07.000 { 00:19:07.000 "method": "nvmf_set_max_subsystems", 00:19:07.000 "params": { 00:19:07.000 "max_subsystems": 1024 00:19:07.000 } 00:19:07.000 }, 00:19:07.000 { 00:19:07.000 "method": "nvmf_set_crdt", 00:19:07.000 "params": { 00:19:07.000 "crdt1": 0, 00:19:07.000 "crdt2": 0, 00:19:07.000 "crdt3": 0 00:19:07.000 } 00:19:07.000 }, 00:19:07.000 { 00:19:07.000 "method": "nvmf_create_transport", 00:19:07.000 "params": { 00:19:07.000 "trtype": "TCP", 00:19:07.000 "max_queue_depth": 128, 00:19:07.000 "max_io_qpairs_per_ctrlr": 127, 00:19:07.000 "in_capsule_data_size": 4096, 00:19:07.000 "max_io_size": 131072, 00:19:07.000 "io_unit_size": 131072, 00:19:07.000 "max_aq_depth": 128, 00:19:07.000 "num_shared_buffers": 511, 00:19:07.000 "buf_cache_size": 4294967295, 00:19:07.000 "dif_insert_or_strip": false, 00:19:07.000 "zcopy": false, 00:19:07.000 "c2h_success": false, 00:19:07.000 "sock_priority": 0, 00:19:07.000 "abort_timeout_sec": 1, 00:19:07.000 "ack_timeout": 0, 00:19:07.000 "data_wr_pool_size": 0 00:19:07.000 } 00:19:07.000 }, 00:19:07.000 { 00:19:07.000 "method": "nvmf_create_subsystem", 00:19:07.000 "params": { 00:19:07.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.000 
"allow_any_host": false, 00:19:07.000 "serial_number": "SPDK00000000000001", 00:19:07.000 "model_number": "SPDK bdev Controller", 00:19:07.000 "max_namespaces": 10, 00:19:07.000 "min_cntlid": 1, 00:19:07.000 "max_cntlid": 65519, 00:19:07.000 "ana_reporting": false 00:19:07.000 } 00:19:07.000 }, 00:19:07.000 { 00:19:07.000 "method": "nvmf_subsystem_add_host", 00:19:07.000 "params": { 00:19:07.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.000 "host": "nqn.2016-06.io.spdk:host1", 00:19:07.000 "psk": "key0" 00:19:07.000 } 00:19:07.000 }, 00:19:07.000 { 00:19:07.000 "method": "nvmf_subsystem_add_ns", 00:19:07.000 "params": { 00:19:07.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.000 "namespace": { 00:19:07.000 "nsid": 1, 00:19:07.000 "bdev_name": "malloc0", 00:19:07.000 "nguid": "E7E5685D88C8405DAD3D6BBBD654797E", 00:19:07.000 "uuid": "e7e5685d-88c8-405d-ad3d-6bbbd654797e", 00:19:07.000 "no_auto_visible": false 00:19:07.000 } 00:19:07.000 } 00:19:07.000 }, 00:19:07.000 { 00:19:07.000 "method": "nvmf_subsystem_add_listener", 00:19:07.000 "params": { 00:19:07.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.000 "listen_address": { 00:19:07.000 "trtype": "TCP", 00:19:07.000 "adrfam": "IPv4", 00:19:07.000 "traddr": "10.0.0.2", 00:19:07.000 "trsvcid": "4420" 00:19:07.000 }, 00:19:07.000 "secure_channel": true 00:19:07.000 } 00:19:07.000 } 00:19:07.000 ] 00:19:07.000 } 00:19:07.000 ] 00:19:07.000 }' 00:19:07.000 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:07.259 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:07.259 "subsystems": [ 00:19:07.259 { 00:19:07.259 "subsystem": "keyring", 00:19:07.259 "config": [ 00:19:07.259 { 00:19:07.259 "method": "keyring_file_add_key", 00:19:07.259 "params": { 00:19:07.259 "name": "key0", 00:19:07.259 "path": "/tmp/tmp.66Q0kCvEFX" 00:19:07.259 } 
00:19:07.259 } 00:19:07.259 ] 00:19:07.259 }, 00:19:07.259 { 00:19:07.259 "subsystem": "iobuf", 00:19:07.259 "config": [ 00:19:07.259 { 00:19:07.259 "method": "iobuf_set_options", 00:19:07.259 "params": { 00:19:07.259 "small_pool_count": 8192, 00:19:07.259 "large_pool_count": 1024, 00:19:07.259 "small_bufsize": 8192, 00:19:07.259 "large_bufsize": 135168, 00:19:07.259 "enable_numa": false 00:19:07.259 } 00:19:07.259 } 00:19:07.259 ] 00:19:07.259 }, 00:19:07.259 { 00:19:07.259 "subsystem": "sock", 00:19:07.259 "config": [ 00:19:07.259 { 00:19:07.259 "method": "sock_set_default_impl", 00:19:07.259 "params": { 00:19:07.259 "impl_name": "posix" 00:19:07.259 } 00:19:07.259 }, 00:19:07.259 { 00:19:07.259 "method": "sock_impl_set_options", 00:19:07.259 "params": { 00:19:07.259 "impl_name": "ssl", 00:19:07.259 "recv_buf_size": 4096, 00:19:07.259 "send_buf_size": 4096, 00:19:07.259 "enable_recv_pipe": true, 00:19:07.259 "enable_quickack": false, 00:19:07.259 "enable_placement_id": 0, 00:19:07.259 "enable_zerocopy_send_server": true, 00:19:07.259 "enable_zerocopy_send_client": false, 00:19:07.259 "zerocopy_threshold": 0, 00:19:07.259 "tls_version": 0, 00:19:07.259 "enable_ktls": false 00:19:07.259 } 00:19:07.259 }, 00:19:07.259 { 00:19:07.259 "method": "sock_impl_set_options", 00:19:07.259 "params": { 00:19:07.259 "impl_name": "posix", 00:19:07.259 "recv_buf_size": 2097152, 00:19:07.259 "send_buf_size": 2097152, 00:19:07.259 "enable_recv_pipe": true, 00:19:07.260 "enable_quickack": false, 00:19:07.260 "enable_placement_id": 0, 00:19:07.260 "enable_zerocopy_send_server": true, 00:19:07.260 "enable_zerocopy_send_client": false, 00:19:07.260 "zerocopy_threshold": 0, 00:19:07.260 "tls_version": 0, 00:19:07.260 "enable_ktls": false 00:19:07.260 } 00:19:07.260 } 00:19:07.260 ] 00:19:07.260 }, 00:19:07.260 { 00:19:07.260 "subsystem": "vmd", 00:19:07.260 "config": [] 00:19:07.260 }, 00:19:07.260 { 00:19:07.260 "subsystem": "accel", 00:19:07.260 "config": [ 00:19:07.260 { 00:19:07.260 
"method": "accel_set_options", 00:19:07.260 "params": { 00:19:07.260 "small_cache_size": 128, 00:19:07.260 "large_cache_size": 16, 00:19:07.260 "task_count": 2048, 00:19:07.260 "sequence_count": 2048, 00:19:07.260 "buf_count": 2048 00:19:07.260 } 00:19:07.260 } 00:19:07.260 ] 00:19:07.260 }, 00:19:07.260 { 00:19:07.260 "subsystem": "bdev", 00:19:07.260 "config": [ 00:19:07.260 { 00:19:07.260 "method": "bdev_set_options", 00:19:07.260 "params": { 00:19:07.260 "bdev_io_pool_size": 65535, 00:19:07.260 "bdev_io_cache_size": 256, 00:19:07.260 "bdev_auto_examine": true, 00:19:07.260 "iobuf_small_cache_size": 128, 00:19:07.260 "iobuf_large_cache_size": 16 00:19:07.260 } 00:19:07.260 }, 00:19:07.260 { 00:19:07.260 "method": "bdev_raid_set_options", 00:19:07.260 "params": { 00:19:07.260 "process_window_size_kb": 1024, 00:19:07.260 "process_max_bandwidth_mb_sec": 0 00:19:07.260 } 00:19:07.260 }, 00:19:07.260 { 00:19:07.260 "method": "bdev_iscsi_set_options", 00:19:07.260 "params": { 00:19:07.260 "timeout_sec": 30 00:19:07.260 } 00:19:07.260 }, 00:19:07.260 { 00:19:07.260 "method": "bdev_nvme_set_options", 00:19:07.260 "params": { 00:19:07.260 "action_on_timeout": "none", 00:19:07.260 "timeout_us": 0, 00:19:07.260 "timeout_admin_us": 0, 00:19:07.260 "keep_alive_timeout_ms": 10000, 00:19:07.260 "arbitration_burst": 0, 00:19:07.260 "low_priority_weight": 0, 00:19:07.260 "medium_priority_weight": 0, 00:19:07.260 "high_priority_weight": 0, 00:19:07.260 "nvme_adminq_poll_period_us": 10000, 00:19:07.260 "nvme_ioq_poll_period_us": 0, 00:19:07.260 "io_queue_requests": 512, 00:19:07.260 "delay_cmd_submit": true, 00:19:07.260 "transport_retry_count": 4, 00:19:07.260 "bdev_retry_count": 3, 00:19:07.260 "transport_ack_timeout": 0, 00:19:07.260 "ctrlr_loss_timeout_sec": 0, 00:19:07.260 "reconnect_delay_sec": 0, 00:19:07.260 "fast_io_fail_timeout_sec": 0, 00:19:07.260 "disable_auto_failback": false, 00:19:07.260 "generate_uuids": false, 00:19:07.260 "transport_tos": 0, 00:19:07.260 
"nvme_error_stat": false, 00:19:07.260 "rdma_srq_size": 0, 00:19:07.260 "io_path_stat": false, 00:19:07.260 "allow_accel_sequence": false, 00:19:07.260 "rdma_max_cq_size": 0, 00:19:07.260 "rdma_cm_event_timeout_ms": 0, 00:19:07.260 "dhchap_digests": [ 00:19:07.260 "sha256", 00:19:07.260 "sha384", 00:19:07.260 "sha512" 00:19:07.260 ], 00:19:07.260 "dhchap_dhgroups": [ 00:19:07.260 "null", 00:19:07.260 "ffdhe2048", 00:19:07.260 "ffdhe3072", 00:19:07.260 "ffdhe4096", 00:19:07.260 "ffdhe6144", 00:19:07.260 "ffdhe8192" 00:19:07.260 ] 00:19:07.260 } 00:19:07.260 }, 00:19:07.260 { 00:19:07.260 "method": "bdev_nvme_attach_controller", 00:19:07.260 "params": { 00:19:07.260 "name": "TLSTEST", 00:19:07.260 "trtype": "TCP", 00:19:07.260 "adrfam": "IPv4", 00:19:07.260 "traddr": "10.0.0.2", 00:19:07.260 "trsvcid": "4420", 00:19:07.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.260 "prchk_reftag": false, 00:19:07.260 "prchk_guard": false, 00:19:07.260 "ctrlr_loss_timeout_sec": 0, 00:19:07.260 "reconnect_delay_sec": 0, 00:19:07.260 "fast_io_fail_timeout_sec": 0, 00:19:07.260 "psk": "key0", 00:19:07.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.260 "hdgst": false, 00:19:07.260 "ddgst": false, 00:19:07.260 "multipath": "multipath" 00:19:07.260 } 00:19:07.260 }, 00:19:07.260 { 00:19:07.260 "method": "bdev_nvme_set_hotplug", 00:19:07.260 "params": { 00:19:07.260 "period_us": 100000, 00:19:07.260 "enable": false 00:19:07.260 } 00:19:07.260 }, 00:19:07.260 { 00:19:07.260 "method": "bdev_wait_for_examine" 00:19:07.260 } 00:19:07.260 ] 00:19:07.260 }, 00:19:07.260 { 00:19:07.260 "subsystem": "nbd", 00:19:07.260 "config": [] 00:19:07.260 } 00:19:07.260 ] 00:19:07.260 }' 00:19:07.260 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2283120 00:19:07.260 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2283120 ']' 00:19:07.260 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2283120 00:19:07.260 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:07.260 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.260 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2283120 00:19:07.260 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:07.260 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:07.260 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2283120' 00:19:07.260 killing process with pid 2283120 00:19:07.260 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2283120 00:19:07.260 Received shutdown signal, test time was about 10.000000 seconds 00:19:07.260 00:19:07.260 Latency(us) 00:19:07.260 [2024-11-19T10:30:21.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.260 [2024-11-19T10:30:21.041Z] =================================================================================================================== 00:19:07.260 [2024-11-19T10:30:21.041Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:07.260 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2283120 00:19:07.260 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2282860 00:19:07.260 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2282860 ']' 00:19:07.260 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2282860 00:19:07.260 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:07.260 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.521 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2282860 00:19:07.521 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:07.521 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:07.521 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2282860' 00:19:07.521 killing process with pid 2282860 00:19:07.521 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2282860 00:19:07.521 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2282860 00:19:07.521 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:07.521 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.521 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.521 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:07.521 "subsystems": [ 00:19:07.521 { 00:19:07.521 "subsystem": "keyring", 00:19:07.521 "config": [ 00:19:07.521 { 00:19:07.521 "method": "keyring_file_add_key", 00:19:07.521 "params": { 00:19:07.521 "name": "key0", 00:19:07.521 "path": "/tmp/tmp.66Q0kCvEFX" 00:19:07.521 } 00:19:07.521 } 00:19:07.521 ] 00:19:07.521 }, 00:19:07.521 { 00:19:07.521 "subsystem": "iobuf", 00:19:07.521 "config": [ 00:19:07.521 { 00:19:07.521 "method": "iobuf_set_options", 00:19:07.521 "params": { 00:19:07.521 "small_pool_count": 8192, 00:19:07.521 "large_pool_count": 1024, 00:19:07.521 "small_bufsize": 8192, 00:19:07.521 "large_bufsize": 135168, 00:19:07.521 "enable_numa": false 00:19:07.521 } 00:19:07.521 } 00:19:07.521 ] 00:19:07.521 }, 
00:19:07.521 { 00:19:07.521 "subsystem": "sock", 00:19:07.521 "config": [ 00:19:07.521 { 00:19:07.521 "method": "sock_set_default_impl", 00:19:07.521 "params": { 00:19:07.521 "impl_name": "posix" 00:19:07.521 } 00:19:07.521 }, 00:19:07.521 { 00:19:07.521 "method": "sock_impl_set_options", 00:19:07.521 "params": { 00:19:07.521 "impl_name": "ssl", 00:19:07.521 "recv_buf_size": 4096, 00:19:07.521 "send_buf_size": 4096, 00:19:07.521 "enable_recv_pipe": true, 00:19:07.521 "enable_quickack": false, 00:19:07.521 "enable_placement_id": 0, 00:19:07.521 "enable_zerocopy_send_server": true, 00:19:07.521 "enable_zerocopy_send_client": false, 00:19:07.521 "zerocopy_threshold": 0, 00:19:07.521 "tls_version": 0, 00:19:07.521 "enable_ktls": false 00:19:07.521 } 00:19:07.521 }, 00:19:07.521 { 00:19:07.521 "method": "sock_impl_set_options", 00:19:07.521 "params": { 00:19:07.521 "impl_name": "posix", 00:19:07.521 "recv_buf_size": 2097152, 00:19:07.521 "send_buf_size": 2097152, 00:19:07.521 "enable_recv_pipe": true, 00:19:07.521 "enable_quickack": false, 00:19:07.521 "enable_placement_id": 0, 00:19:07.521 "enable_zerocopy_send_server": true, 00:19:07.521 "enable_zerocopy_send_client": false, 00:19:07.521 "zerocopy_threshold": 0, 00:19:07.521 "tls_version": 0, 00:19:07.521 "enable_ktls": false 00:19:07.521 } 00:19:07.521 } 00:19:07.521 ] 00:19:07.521 }, 00:19:07.521 { 00:19:07.521 "subsystem": "vmd", 00:19:07.521 "config": [] 00:19:07.521 }, 00:19:07.521 { 00:19:07.521 "subsystem": "accel", 00:19:07.521 "config": [ 00:19:07.521 { 00:19:07.521 "method": "accel_set_options", 00:19:07.521 "params": { 00:19:07.521 "small_cache_size": 128, 00:19:07.521 "large_cache_size": 16, 00:19:07.521 "task_count": 2048, 00:19:07.521 "sequence_count": 2048, 00:19:07.521 "buf_count": 2048 00:19:07.521 } 00:19:07.521 } 00:19:07.521 ] 00:19:07.521 }, 00:19:07.521 { 00:19:07.521 "subsystem": "bdev", 00:19:07.521 "config": [ 00:19:07.521 { 00:19:07.521 "method": "bdev_set_options", 00:19:07.521 "params": { 
00:19:07.521 "bdev_io_pool_size": 65535, 00:19:07.521 "bdev_io_cache_size": 256, 00:19:07.521 "bdev_auto_examine": true, 00:19:07.521 "iobuf_small_cache_size": 128, 00:19:07.521 "iobuf_large_cache_size": 16 00:19:07.521 } 00:19:07.521 }, 00:19:07.521 { 00:19:07.521 "method": "bdev_raid_set_options", 00:19:07.521 "params": { 00:19:07.521 "process_window_size_kb": 1024, 00:19:07.521 "process_max_bandwidth_mb_sec": 0 00:19:07.521 } 00:19:07.521 }, 00:19:07.521 { 00:19:07.521 "method": "bdev_iscsi_set_options", 00:19:07.521 "params": { 00:19:07.521 "timeout_sec": 30 00:19:07.521 } 00:19:07.521 }, 00:19:07.521 { 00:19:07.521 "method": "bdev_nvme_set_options", 00:19:07.521 "params": { 00:19:07.521 "action_on_timeout": "none", 00:19:07.521 "timeout_us": 0, 00:19:07.521 "timeout_admin_us": 0, 00:19:07.521 "keep_alive_timeout_ms": 10000, 00:19:07.521 "arbitration_burst": 0, 00:19:07.521 "low_priority_weight": 0, 00:19:07.521 "medium_priority_weight": 0, 00:19:07.521 "high_priority_weight": 0, 00:19:07.521 "nvme_adminq_poll_period_us": 10000, 00:19:07.521 "nvme_ioq_poll_period_us": 0, 00:19:07.521 "io_queue_requests": 0, 00:19:07.521 "delay_cmd_submit": true, 00:19:07.521 "transport_retry_count": 4, 00:19:07.521 "bdev_retry_count": 3, 00:19:07.521 "transport_ack_timeout": 0, 00:19:07.521 "ctrlr_loss_timeout_sec": 0, 00:19:07.521 "reconnect_delay_sec": 0, 00:19:07.521 "fast_io_fail_timeout_sec": 0, 00:19:07.521 "disable_auto_failback": false, 00:19:07.521 "generate_uuids": false, 00:19:07.521 "transport_tos": 0, 00:19:07.521 "nvme_error_stat": false, 00:19:07.521 "rdma_srq_size": 0, 00:19:07.521 "io_path_stat": false, 00:19:07.521 "allow_accel_sequence": false, 00:19:07.521 "rdma_max_cq_size": 0, 00:19:07.521 "rdma_cm_event_timeout_ms": 0, 00:19:07.521 "dhchap_digests": [ 00:19:07.521 "sha256", 00:19:07.521 "sha384", 00:19:07.521 "sha512" 00:19:07.521 ], 00:19:07.521 "dhchap_dhgroups": [ 00:19:07.521 "null", 00:19:07.521 "ffdhe2048", 00:19:07.521 "ffdhe3072", 00:19:07.521 
"ffdhe4096", 00:19:07.521 "ffdhe6144", 00:19:07.521 "ffdhe8192" 00:19:07.521 ] 00:19:07.521 } 00:19:07.521 }, 00:19:07.521 { 00:19:07.521 "method": "bdev_nvme_set_hotplug", 00:19:07.521 "params": { 00:19:07.521 "period_us": 100000, 00:19:07.521 "enable": false 00:19:07.521 } 00:19:07.521 }, 00:19:07.521 { 00:19:07.521 "method": "bdev_malloc_create", 00:19:07.521 "params": { 00:19:07.521 "name": "malloc0", 00:19:07.521 "num_blocks": 8192, 00:19:07.521 "block_size": 4096, 00:19:07.522 "physical_block_size": 4096, 00:19:07.522 "uuid": "e7e5685d-88c8-405d-ad3d-6bbbd654797e", 00:19:07.522 "optimal_io_boundary": 0, 00:19:07.522 "md_size": 0, 00:19:07.522 "dif_type": 0, 00:19:07.522 "dif_is_head_of_md": false, 00:19:07.522 "dif_pi_format": 0 00:19:07.522 } 00:19:07.522 }, 00:19:07.522 { 00:19:07.522 "method": "bdev_wait_for_examine" 00:19:07.522 } 00:19:07.522 ] 00:19:07.522 }, 00:19:07.522 { 00:19:07.522 "subsystem": "nbd", 00:19:07.522 "config": [] 00:19:07.522 }, 00:19:07.522 { 00:19:07.522 "subsystem": "scheduler", 00:19:07.522 "config": [ 00:19:07.522 { 00:19:07.522 "method": "framework_set_scheduler", 00:19:07.522 "params": { 00:19:07.522 "name": "static" 00:19:07.522 } 00:19:07.522 } 00:19:07.522 ] 00:19:07.522 }, 00:19:07.522 { 00:19:07.522 "subsystem": "nvmf", 00:19:07.522 "config": [ 00:19:07.522 { 00:19:07.522 "method": "nvmf_set_config", 00:19:07.522 "params": { 00:19:07.522 "discovery_filter": "match_any", 00:19:07.522 "admin_cmd_passthru": { 00:19:07.522 "identify_ctrlr": false 00:19:07.522 }, 00:19:07.522 "dhchap_digests": [ 00:19:07.522 "sha256", 00:19:07.522 "sha384", 00:19:07.522 "sha512" 00:19:07.522 ], 00:19:07.522 "dhchap_dhgroups": [ 00:19:07.522 "null", 00:19:07.522 "ffdhe2048", 00:19:07.522 "ffdhe3072", 00:19:07.522 "ffdhe4096", 00:19:07.522 "ffdhe6144", 00:19:07.522 "ffdhe8192" 00:19:07.522 ] 00:19:07.522 } 00:19:07.522 }, 00:19:07.522 { 00:19:07.522 "method": "nvmf_set_max_subsystems", 00:19:07.522 "params": { 00:19:07.522 "max_subsystems": 1024 
00:19:07.522 } 00:19:07.522 }, 00:19:07.522 { 00:19:07.522 "method": "nvmf_set_crdt", 00:19:07.522 "params": { 00:19:07.522 "crdt1": 0, 00:19:07.522 "crdt2": 0, 00:19:07.522 "crdt3": 0 00:19:07.522 } 00:19:07.522 }, 00:19:07.522 { 00:19:07.522 "method": "nvmf_create_transport", 00:19:07.522 "params": { 00:19:07.522 "trtype": "TCP", 00:19:07.522 "max_queue_depth": 128, 00:19:07.522 "max_io_qpairs_per_ctrlr": 127, 00:19:07.522 "in_capsule_data_size": 4096, 00:19:07.522 "max_io_size": 131072, 00:19:07.522 "io_unit_size": 131072, 00:19:07.522 "max_aq_depth": 128, 00:19:07.522 "num_shared_buffers": 511, 00:19:07.522 "buf_cache_size": 4294967295, 00:19:07.522 "dif_insert_or_strip": false, 00:19:07.522 "zcopy": false, 00:19:07.522 "c2h_success": false, 00:19:07.522 "sock_priority": 0, 00:19:07.522 "abort_timeout_sec": 1, 00:19:07.522 "ack_timeout": 0, 00:19:07.522 "data_wr_pool_size": 0 00:19:07.522 } 00:19:07.522 }, 00:19:07.522 { 00:19:07.522 "method": "nvmf_create_subsystem", 00:19:07.522 "params": { 00:19:07.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.522 "allow_any_host": false, 00:19:07.522 "serial_number": "SPDK00000000000001", 00:19:07.522 "model_number": "SPDK bdev Controller", 00:19:07.522 "max_namespaces": 10, 00:19:07.522 "min_cntlid": 1, 00:19:07.522 "max_cntlid": 65519, 00:19:07.522 "ana_reporting": false 00:19:07.522 } 00:19:07.522 }, 00:19:07.522 { 00:19:07.522 "method": "nvmf_subsystem_add_host", 00:19:07.522 "params": { 00:19:07.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.522 "host": "nqn.2016-06.io.spdk:host1", 00:19:07.522 "psk": "key0" 00:19:07.522 } 00:19:07.522 }, 00:19:07.522 { 00:19:07.522 "method": "nvmf_subsystem_add_ns", 00:19:07.522 "params": { 00:19:07.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.522 "namespace": { 00:19:07.522 "nsid": 1, 00:19:07.522 "bdev_name": "malloc0", 00:19:07.522 "nguid": "E7E5685D88C8405DAD3D6BBBD654797E", 00:19:07.522 "uuid": "e7e5685d-88c8-405d-ad3d-6bbbd654797e", 00:19:07.522 "no_auto_visible": 
false 00:19:07.522 } 00:19:07.522 } 00:19:07.522 }, 00:19:07.522 { 00:19:07.522 "method": "nvmf_subsystem_add_listener", 00:19:07.522 "params": { 00:19:07.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.522 "listen_address": { 00:19:07.522 "trtype": "TCP", 00:19:07.522 "adrfam": "IPv4", 00:19:07.522 "traddr": "10.0.0.2", 00:19:07.522 "trsvcid": "4420" 00:19:07.522 }, 00:19:07.522 "secure_channel": true 00:19:07.522 } 00:19:07.522 } 00:19:07.522 ] 00:19:07.522 } 00:19:07.522 ] 00:19:07.522 }' 00:19:07.522 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.522 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2283377 00:19:07.522 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2283377 00:19:07.522 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:07.522 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2283377 ']' 00:19:07.522 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.522 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.522 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:07.522 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.522 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.782 [2024-11-19 11:30:21.299151] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:07.782 [2024-11-19 11:30:21.299202] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.782 [2024-11-19 11:30:21.379183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.782 [2024-11-19 11:30:21.417730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.782 [2024-11-19 11:30:21.417766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.782 [2024-11-19 11:30:21.417775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.782 [2024-11-19 11:30:21.417782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.782 [2024-11-19 11:30:21.417788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:07.782 [2024-11-19 11:30:21.418384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.041 [2024-11-19 11:30:21.632208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.041 [2024-11-19 11:30:21.664227] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:08.041 [2024-11-19 11:30:21.664425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2283614 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2283614 /var/tmp/bdevperf.sock 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2283614 ']' 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.609 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:08.609 "subsystems": [ 00:19:08.609 { 00:19:08.609 "subsystem": "keyring", 00:19:08.609 "config": [ 00:19:08.609 { 00:19:08.609 "method": "keyring_file_add_key", 00:19:08.609 "params": { 00:19:08.609 "name": "key0", 00:19:08.609 "path": "/tmp/tmp.66Q0kCvEFX" 00:19:08.609 } 00:19:08.609 } 00:19:08.609 ] 00:19:08.609 }, 00:19:08.609 { 00:19:08.609 "subsystem": "iobuf", 00:19:08.609 "config": [ 00:19:08.609 { 00:19:08.609 "method": "iobuf_set_options", 00:19:08.609 "params": { 00:19:08.609 "small_pool_count": 8192, 00:19:08.609 "large_pool_count": 1024, 00:19:08.609 "small_bufsize": 8192, 00:19:08.609 "large_bufsize": 135168, 00:19:08.609 "enable_numa": false 00:19:08.609 } 00:19:08.609 } 00:19:08.609 ] 00:19:08.609 }, 00:19:08.609 { 00:19:08.609 "subsystem": "sock", 00:19:08.609 "config": [ 00:19:08.609 { 00:19:08.609 "method": "sock_set_default_impl", 00:19:08.609 "params": { 00:19:08.609 "impl_name": "posix" 00:19:08.609 } 00:19:08.609 }, 00:19:08.609 { 00:19:08.609 "method": "sock_impl_set_options", 00:19:08.609 "params": { 00:19:08.609 "impl_name": "ssl", 00:19:08.609 "recv_buf_size": 4096, 00:19:08.609 "send_buf_size": 4096, 00:19:08.609 "enable_recv_pipe": true, 00:19:08.609 "enable_quickack": false, 00:19:08.609 "enable_placement_id": 0, 00:19:08.609 "enable_zerocopy_send_server": true, 00:19:08.609 "enable_zerocopy_send_client": false, 00:19:08.609 "zerocopy_threshold": 0, 00:19:08.609 "tls_version": 0, 00:19:08.609 "enable_ktls": false 00:19:08.609 } 00:19:08.609 }, 00:19:08.609 { 00:19:08.609 "method": "sock_impl_set_options", 00:19:08.609 "params": { 
00:19:08.609 "impl_name": "posix", 00:19:08.609 "recv_buf_size": 2097152, 00:19:08.609 "send_buf_size": 2097152, 00:19:08.609 "enable_recv_pipe": true, 00:19:08.609 "enable_quickack": false, 00:19:08.609 "enable_placement_id": 0, 00:19:08.609 "enable_zerocopy_send_server": true, 00:19:08.609 "enable_zerocopy_send_client": false, 00:19:08.609 "zerocopy_threshold": 0, 00:19:08.609 "tls_version": 0, 00:19:08.609 "enable_ktls": false 00:19:08.609 } 00:19:08.609 } 00:19:08.609 ] 00:19:08.609 }, 00:19:08.609 { 00:19:08.609 "subsystem": "vmd", 00:19:08.609 "config": [] 00:19:08.609 }, 00:19:08.609 { 00:19:08.609 "subsystem": "accel", 00:19:08.609 "config": [ 00:19:08.609 { 00:19:08.609 "method": "accel_set_options", 00:19:08.609 "params": { 00:19:08.609 "small_cache_size": 128, 00:19:08.609 "large_cache_size": 16, 00:19:08.609 "task_count": 2048, 00:19:08.609 "sequence_count": 2048, 00:19:08.609 "buf_count": 2048 00:19:08.609 } 00:19:08.609 } 00:19:08.609 ] 00:19:08.609 }, 00:19:08.609 { 00:19:08.609 "subsystem": "bdev", 00:19:08.609 "config": [ 00:19:08.609 { 00:19:08.609 "method": "bdev_set_options", 00:19:08.609 "params": { 00:19:08.609 "bdev_io_pool_size": 65535, 00:19:08.609 "bdev_io_cache_size": 256, 00:19:08.609 "bdev_auto_examine": true, 00:19:08.609 "iobuf_small_cache_size": 128, 00:19:08.609 "iobuf_large_cache_size": 16 00:19:08.609 } 00:19:08.609 }, 00:19:08.609 { 00:19:08.609 "method": "bdev_raid_set_options", 00:19:08.609 "params": { 00:19:08.609 "process_window_size_kb": 1024, 00:19:08.609 "process_max_bandwidth_mb_sec": 0 00:19:08.609 } 00:19:08.609 }, 00:19:08.609 { 00:19:08.609 "method": "bdev_iscsi_set_options", 00:19:08.609 "params": { 00:19:08.609 "timeout_sec": 30 00:19:08.609 } 00:19:08.609 }, 00:19:08.609 { 00:19:08.609 "method": "bdev_nvme_set_options", 00:19:08.609 "params": { 00:19:08.609 "action_on_timeout": "none", 00:19:08.609 "timeout_us": 0, 00:19:08.609 "timeout_admin_us": 0, 00:19:08.609 "keep_alive_timeout_ms": 10000, 00:19:08.609 
"arbitration_burst": 0, 00:19:08.609 "low_priority_weight": 0, 00:19:08.609 "medium_priority_weight": 0, 00:19:08.609 "high_priority_weight": 0, 00:19:08.609 "nvme_adminq_poll_period_us": 10000, 00:19:08.609 "nvme_ioq_poll_period_us": 0, 00:19:08.609 "io_queue_requests": 512, 00:19:08.609 "delay_cmd_submit": true, 00:19:08.609 "transport_retry_count": 4, 00:19:08.609 "bdev_retry_count": 3, 00:19:08.609 "transport_ack_timeout": 0, 00:19:08.609 "ctrlr_loss_timeout_sec": 0, 00:19:08.609 "reconnect_delay_sec": 0, 00:19:08.609 "fast_io_fail_timeout_sec": 0, 00:19:08.609 "disable_auto_failback": false, 00:19:08.609 "generate_uuids": false, 00:19:08.609 "transport_tos": 0, 00:19:08.609 "nvme_error_stat": false, 00:19:08.609 "rdma_srq_size": 0, 00:19:08.609 "io_path_stat": false, 00:19:08.609 "allow_accel_sequence": false, 00:19:08.609 "rdma_max_cq_size": 0, 00:19:08.609 "rdma_cm_event_timeout_ms": 0, 00:19:08.609 "dhchap_digests": [ 00:19:08.609 "sha256", 00:19:08.609 "sha384", 00:19:08.609 "sha512" 00:19:08.609 ], 00:19:08.609 "dhchap_dhgroups": [ 00:19:08.609 "null", 00:19:08.609 "ffdhe2048", 00:19:08.609 "ffdhe3072", 00:19:08.609 "ffdhe4096", 00:19:08.609 "ffdhe6144", 00:19:08.609 "ffdhe8192" 00:19:08.609 ] 00:19:08.609 } 00:19:08.609 }, 00:19:08.609 { 00:19:08.609 "method": "bdev_nvme_attach_controller", 00:19:08.609 "params": { 00:19:08.609 "name": "TLSTEST", 00:19:08.609 "trtype": "TCP", 00:19:08.609 "adrfam": "IPv4", 00:19:08.609 "traddr": "10.0.0.2", 00:19:08.609 "trsvcid": "4420", 00:19:08.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.609 "prchk_reftag": false, 00:19:08.609 "prchk_guard": false, 00:19:08.609 "ctrlr_loss_timeout_sec": 0, 00:19:08.609 "reconnect_delay_sec": 0, 00:19:08.609 "fast_io_fail_timeout_sec": 0, 00:19:08.609 "psk": "key0", 00:19:08.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.609 "hdgst": false, 00:19:08.610 "ddgst": false, 00:19:08.610 "multipath": "multipath" 00:19:08.610 } 00:19:08.610 }, 00:19:08.610 { 00:19:08.610 
"method": "bdev_nvme_set_hotplug", 00:19:08.610 "params": { 00:19:08.610 "period_us": 100000, 00:19:08.610 "enable": false 00:19:08.610 } 00:19:08.610 }, 00:19:08.610 { 00:19:08.610 "method": "bdev_wait_for_examine" 00:19:08.610 } 00:19:08.610 ] 00:19:08.610 }, 00:19:08.610 { 00:19:08.610 "subsystem": "nbd", 00:19:08.610 "config": [] 00:19:08.610 } 00:19:08.610 ] 00:19:08.610 }' 00:19:08.610 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.610 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.610 [2024-11-19 11:30:22.225428] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:08.610 [2024-11-19 11:30:22.225477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283614 ] 00:19:08.610 [2024-11-19 11:30:22.299808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.610 [2024-11-19 11:30:22.340328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.868 [2024-11-19 11:30:22.493267] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:09.434 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.434 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:09.434 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:09.434 Running I/O for 10 seconds... 
00:19:11.741 5252.00 IOPS, 20.52 MiB/s [2024-11-19T10:30:26.460Z] 5326.00 IOPS, 20.80 MiB/s [2024-11-19T10:30:27.397Z] 5376.33 IOPS, 21.00 MiB/s [2024-11-19T10:30:28.412Z] 5401.00 IOPS, 21.10 MiB/s [2024-11-19T10:30:29.347Z] 5423.00 IOPS, 21.18 MiB/s [2024-11-19T10:30:30.281Z] 5434.00 IOPS, 21.23 MiB/s [2024-11-19T10:30:31.216Z] 5433.86 IOPS, 21.23 MiB/s [2024-11-19T10:30:32.591Z] 5425.00 IOPS, 21.19 MiB/s [2024-11-19T10:30:33.527Z] 5416.00 IOPS, 21.16 MiB/s [2024-11-19T10:30:33.527Z] 5422.40 IOPS, 21.18 MiB/s 00:19:19.746 Latency(us) 00:19:19.746 [2024-11-19T10:30:33.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.746 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:19.746 Verification LBA range: start 0x0 length 0x2000 00:19:19.746 TLSTESTn1 : 10.01 5427.44 21.20 0.00 0.00 23550.37 5014.93 23820.91 00:19:19.746 [2024-11-19T10:30:33.527Z] =================================================================================================================== 00:19:19.746 [2024-11-19T10:30:33.527Z] Total : 5427.44 21.20 0.00 0.00 23550.37 5014.93 23820.91 00:19:19.746 { 00:19:19.746 "results": [ 00:19:19.746 { 00:19:19.746 "job": "TLSTESTn1", 00:19:19.746 "core_mask": "0x4", 00:19:19.746 "workload": "verify", 00:19:19.746 "status": "finished", 00:19:19.746 "verify_range": { 00:19:19.746 "start": 0, 00:19:19.746 "length": 8192 00:19:19.746 }, 00:19:19.746 "queue_depth": 128, 00:19:19.746 "io_size": 4096, 00:19:19.746 "runtime": 10.01392, 00:19:19.746 "iops": 5427.444996564782, 00:19:19.746 "mibps": 21.20095701783118, 00:19:19.746 "io_failed": 0, 00:19:19.746 "io_timeout": 0, 00:19:19.746 "avg_latency_us": 23550.371167873287, 00:19:19.746 "min_latency_us": 5014.928695652174, 00:19:19.746 "max_latency_us": 23820.911304347825 00:19:19.746 } 00:19:19.746 ], 00:19:19.746 "core_count": 1 00:19:19.746 } 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2283614 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2283614 ']' 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2283614 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2283614 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2283614' 00:19:19.746 killing process with pid 2283614 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2283614 00:19:19.746 Received shutdown signal, test time was about 10.000000 seconds 00:19:19.746 00:19:19.746 Latency(us) 00:19:19.746 [2024-11-19T10:30:33.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.746 [2024-11-19T10:30:33.527Z] =================================================================================================================== 00:19:19.746 [2024-11-19T10:30:33.527Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2283614 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2283377 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2283377 ']' 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2283377 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2283377 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2283377' 00:19:19.746 killing process with pid 2283377 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2283377 00:19:19.746 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2283377 00:19:20.005 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:20.005 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.005 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.005 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.005 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2285461 00:19:20.005 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:20.005 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2285461 00:19:20.005 
11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2285461 ']' 00:19:20.005 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.005 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.005 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.005 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.005 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.005 [2024-11-19 11:30:33.721409] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:20.005 [2024-11-19 11:30:33.721459] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.264 [2024-11-19 11:30:33.796255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.264 [2024-11-19 11:30:33.833736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.264 [2024-11-19 11:30:33.833770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.264 [2024-11-19 11:30:33.833777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.264 [2024-11-19 11:30:33.833782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:20.264 [2024-11-19 11:30:33.833787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.264 [2024-11-19 11:30:33.834333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.264 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.264 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:20.264 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.264 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.264 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.264 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.264 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.66Q0kCvEFX 00:19:20.264 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.66Q0kCvEFX 00:19:20.264 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:20.523 [2024-11-19 11:30:34.145609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.523 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:20.781 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:20.781 [2024-11-19 11:30:34.558695] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:19:20.781 [2024-11-19 11:30:34.558885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.039 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:21.039 malloc0 00:19:21.039 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:21.298 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.66Q0kCvEFX 00:19:21.557 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:21.816 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:21.816 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2285722 00:19:21.816 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.816 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2285722 /var/tmp/bdevperf.sock 00:19:21.816 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2285722 ']' 00:19:21.816 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.816 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.816 
11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.816 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.816 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.816 [2024-11-19 11:30:35.414169] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:21.816 [2024-11-19 11:30:35.414215] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285722 ] 00:19:21.816 [2024-11-19 11:30:35.487521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.816 [2024-11-19 11:30:35.529508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.075 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.075 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:22.075 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.66Q0kCvEFX 00:19:22.075 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:22.333 [2024-11-19 11:30:36.005863] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:19:22.333 nvme0n1 00:19:22.333 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:22.592 Running I/O for 1 seconds... 00:19:23.528 5397.00 IOPS, 21.08 MiB/s 00:19:23.528 Latency(us) 00:19:23.528 [2024-11-19T10:30:37.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.528 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:23.528 Verification LBA range: start 0x0 length 0x2000 00:19:23.528 nvme0n1 : 1.01 5447.95 21.28 0.00 0.00 23335.02 4929.45 22795.13 00:19:23.528 [2024-11-19T10:30:37.309Z] =================================================================================================================== 00:19:23.528 [2024-11-19T10:30:37.309Z] Total : 5447.95 21.28 0.00 0.00 23335.02 4929.45 22795.13 00:19:23.528 { 00:19:23.528 "results": [ 00:19:23.528 { 00:19:23.528 "job": "nvme0n1", 00:19:23.528 "core_mask": "0x2", 00:19:23.528 "workload": "verify", 00:19:23.528 "status": "finished", 00:19:23.528 "verify_range": { 00:19:23.528 "start": 0, 00:19:23.528 "length": 8192 00:19:23.528 }, 00:19:23.528 "queue_depth": 128, 00:19:23.528 "io_size": 4096, 00:19:23.528 "runtime": 1.014326, 00:19:23.528 "iops": 5447.952630613826, 00:19:23.528 "mibps": 21.281064963335258, 00:19:23.528 "io_failed": 0, 00:19:23.528 "io_timeout": 0, 00:19:23.528 "avg_latency_us": 23335.01686698453, 00:19:23.528 "min_latency_us": 4929.446956521739, 00:19:23.528 "max_latency_us": 22795.130434782608 00:19:23.528 } 00:19:23.528 ], 00:19:23.528 "core_count": 1 00:19:23.528 } 00:19:23.528 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2285722 00:19:23.528 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2285722 ']' 00:19:23.528 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2285722 00:19:23.528 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:23.528 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.528 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2285722 00:19:23.528 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:23.528 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:23.528 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2285722' 00:19:23.528 killing process with pid 2285722 00:19:23.528 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2285722 00:19:23.528 Received shutdown signal, test time was about 1.000000 seconds 00:19:23.528 00:19:23.528 Latency(us) 00:19:23.528 [2024-11-19T10:30:37.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.528 [2024-11-19T10:30:37.309Z] =================================================================================================================== 00:19:23.528 [2024-11-19T10:30:37.309Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.528 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2285722 00:19:23.787 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2285461 00:19:23.787 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2285461 ']' 00:19:23.787 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2285461 00:19:23.787 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:23.787 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.787 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2285461 00:19:23.787 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.787 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.787 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2285461' 00:19:23.787 killing process with pid 2285461 00:19:23.787 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2285461 00:19:23.787 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2285461 00:19:24.046 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:24.047 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.047 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.047 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.047 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2286185 00:19:24.047 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:24.047 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2286185 00:19:24.047 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2286185 ']' 00:19:24.047 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.047 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:19:24.047 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.047 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.047 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.047 [2024-11-19 11:30:37.723099] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:24.047 [2024-11-19 11:30:37.723144] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.047 [2024-11-19 11:30:37.801045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.306 [2024-11-19 11:30:37.838101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.306 [2024-11-19 11:30:37.838134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.306 [2024-11-19 11:30:37.838141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.306 [2024-11-19 11:30:37.838148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.306 [2024-11-19 11:30:37.838153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:24.306 [2024-11-19 11:30:37.838719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.306 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.306 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:24.306 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.306 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.306 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.306 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.306 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:24.306 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.306 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.306 [2024-11-19 11:30:37.982280] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.306 malloc0 00:19:24.306 [2024-11-19 11:30:38.010561] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:24.306 [2024-11-19 11:30:38.010756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.306 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.306 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2286217 00:19:24.306 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:24.306 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 2286217 /var/tmp/bdevperf.sock 00:19:24.306 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2286217 ']' 00:19:24.306 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.306 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.306 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.306 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.306 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.306 [2024-11-19 11:30:38.084662] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:19:24.306 [2024-11-19 11:30:38.084705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286217 ] 00:19:24.566 [2024-11-19 11:30:38.157229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.566 [2024-11-19 11:30:38.199801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.566 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.566 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:24.566 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.66Q0kCvEFX 00:19:24.824 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:25.083 [2024-11-19 11:30:38.659121] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.083 nvme0n1 00:19:25.083 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:25.083 Running I/O for 1 seconds... 
00:19:26.458 5298.00 IOPS, 20.70 MiB/s 00:19:26.458 Latency(us) 00:19:26.458 [2024-11-19T10:30:40.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.458 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:26.458 Verification LBA range: start 0x0 length 0x2000 00:19:26.458 nvme0n1 : 1.01 5354.70 20.92 0.00 0.00 23740.96 5043.42 24162.84 00:19:26.458 [2024-11-19T10:30:40.239Z] =================================================================================================================== 00:19:26.458 [2024-11-19T10:30:40.239Z] Total : 5354.70 20.92 0.00 0.00 23740.96 5043.42 24162.84 00:19:26.458 { 00:19:26.458 "results": [ 00:19:26.458 { 00:19:26.458 "job": "nvme0n1", 00:19:26.458 "core_mask": "0x2", 00:19:26.458 "workload": "verify", 00:19:26.458 "status": "finished", 00:19:26.458 "verify_range": { 00:19:26.458 "start": 0, 00:19:26.458 "length": 8192 00:19:26.458 }, 00:19:26.458 "queue_depth": 128, 00:19:26.458 "io_size": 4096, 00:19:26.458 "runtime": 1.013316, 00:19:26.458 "iops": 5354.696856656758, 00:19:26.458 "mibps": 20.916784596315463, 00:19:26.458 "io_failed": 0, 00:19:26.458 "io_timeout": 0, 00:19:26.458 "avg_latency_us": 23740.959253834197, 00:19:26.458 "min_latency_us": 5043.422608695652, 00:19:26.458 "max_latency_us": 24162.838260869565 00:19:26.458 } 00:19:26.458 ], 00:19:26.458 "core_count": 1 00:19:26.458 } 00:19:26.458 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:26.458 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.458 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.458 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.458 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:26.458 "subsystems": [ 00:19:26.458 { 00:19:26.458 "subsystem": 
"keyring", 00:19:26.458 "config": [ 00:19:26.458 { 00:19:26.458 "method": "keyring_file_add_key", 00:19:26.458 "params": { 00:19:26.458 "name": "key0", 00:19:26.458 "path": "/tmp/tmp.66Q0kCvEFX" 00:19:26.458 } 00:19:26.458 } 00:19:26.458 ] 00:19:26.458 }, 00:19:26.458 { 00:19:26.458 "subsystem": "iobuf", 00:19:26.458 "config": [ 00:19:26.458 { 00:19:26.458 "method": "iobuf_set_options", 00:19:26.458 "params": { 00:19:26.458 "small_pool_count": 8192, 00:19:26.458 "large_pool_count": 1024, 00:19:26.458 "small_bufsize": 8192, 00:19:26.458 "large_bufsize": 135168, 00:19:26.458 "enable_numa": false 00:19:26.458 } 00:19:26.458 } 00:19:26.458 ] 00:19:26.458 }, 00:19:26.458 { 00:19:26.458 "subsystem": "sock", 00:19:26.458 "config": [ 00:19:26.458 { 00:19:26.458 "method": "sock_set_default_impl", 00:19:26.458 "params": { 00:19:26.458 "impl_name": "posix" 00:19:26.458 } 00:19:26.458 }, 00:19:26.458 { 00:19:26.458 "method": "sock_impl_set_options", 00:19:26.458 "params": { 00:19:26.458 "impl_name": "ssl", 00:19:26.458 "recv_buf_size": 4096, 00:19:26.458 "send_buf_size": 4096, 00:19:26.459 "enable_recv_pipe": true, 00:19:26.459 "enable_quickack": false, 00:19:26.459 "enable_placement_id": 0, 00:19:26.459 "enable_zerocopy_send_server": true, 00:19:26.459 "enable_zerocopy_send_client": false, 00:19:26.459 "zerocopy_threshold": 0, 00:19:26.459 "tls_version": 0, 00:19:26.459 "enable_ktls": false 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "sock_impl_set_options", 00:19:26.459 "params": { 00:19:26.459 "impl_name": "posix", 00:19:26.459 "recv_buf_size": 2097152, 00:19:26.459 "send_buf_size": 2097152, 00:19:26.459 "enable_recv_pipe": true, 00:19:26.459 "enable_quickack": false, 00:19:26.459 "enable_placement_id": 0, 00:19:26.459 "enable_zerocopy_send_server": true, 00:19:26.459 "enable_zerocopy_send_client": false, 00:19:26.459 "zerocopy_threshold": 0, 00:19:26.459 "tls_version": 0, 00:19:26.459 "enable_ktls": false 00:19:26.459 } 00:19:26.459 } 00:19:26.459 
] 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "subsystem": "vmd", 00:19:26.459 "config": [] 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "subsystem": "accel", 00:19:26.459 "config": [ 00:19:26.459 { 00:19:26.459 "method": "accel_set_options", 00:19:26.459 "params": { 00:19:26.459 "small_cache_size": 128, 00:19:26.459 "large_cache_size": 16, 00:19:26.459 "task_count": 2048, 00:19:26.459 "sequence_count": 2048, 00:19:26.459 "buf_count": 2048 00:19:26.459 } 00:19:26.459 } 00:19:26.459 ] 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "subsystem": "bdev", 00:19:26.459 "config": [ 00:19:26.459 { 00:19:26.459 "method": "bdev_set_options", 00:19:26.459 "params": { 00:19:26.459 "bdev_io_pool_size": 65535, 00:19:26.459 "bdev_io_cache_size": 256, 00:19:26.459 "bdev_auto_examine": true, 00:19:26.459 "iobuf_small_cache_size": 128, 00:19:26.459 "iobuf_large_cache_size": 16 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "bdev_raid_set_options", 00:19:26.459 "params": { 00:19:26.459 "process_window_size_kb": 1024, 00:19:26.459 "process_max_bandwidth_mb_sec": 0 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "bdev_iscsi_set_options", 00:19:26.459 "params": { 00:19:26.459 "timeout_sec": 30 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "bdev_nvme_set_options", 00:19:26.459 "params": { 00:19:26.459 "action_on_timeout": "none", 00:19:26.459 "timeout_us": 0, 00:19:26.459 "timeout_admin_us": 0, 00:19:26.459 "keep_alive_timeout_ms": 10000, 00:19:26.459 "arbitration_burst": 0, 00:19:26.459 "low_priority_weight": 0, 00:19:26.459 "medium_priority_weight": 0, 00:19:26.459 "high_priority_weight": 0, 00:19:26.459 "nvme_adminq_poll_period_us": 10000, 00:19:26.459 "nvme_ioq_poll_period_us": 0, 00:19:26.459 "io_queue_requests": 0, 00:19:26.459 "delay_cmd_submit": true, 00:19:26.459 "transport_retry_count": 4, 00:19:26.459 "bdev_retry_count": 3, 00:19:26.459 "transport_ack_timeout": 0, 00:19:26.459 "ctrlr_loss_timeout_sec": 0, 
00:19:26.459 "reconnect_delay_sec": 0, 00:19:26.459 "fast_io_fail_timeout_sec": 0, 00:19:26.459 "disable_auto_failback": false, 00:19:26.459 "generate_uuids": false, 00:19:26.459 "transport_tos": 0, 00:19:26.459 "nvme_error_stat": false, 00:19:26.459 "rdma_srq_size": 0, 00:19:26.459 "io_path_stat": false, 00:19:26.459 "allow_accel_sequence": false, 00:19:26.459 "rdma_max_cq_size": 0, 00:19:26.459 "rdma_cm_event_timeout_ms": 0, 00:19:26.459 "dhchap_digests": [ 00:19:26.459 "sha256", 00:19:26.459 "sha384", 00:19:26.459 "sha512" 00:19:26.459 ], 00:19:26.459 "dhchap_dhgroups": [ 00:19:26.459 "null", 00:19:26.459 "ffdhe2048", 00:19:26.459 "ffdhe3072", 00:19:26.459 "ffdhe4096", 00:19:26.459 "ffdhe6144", 00:19:26.459 "ffdhe8192" 00:19:26.459 ] 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "bdev_nvme_set_hotplug", 00:19:26.459 "params": { 00:19:26.459 "period_us": 100000, 00:19:26.459 "enable": false 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "bdev_malloc_create", 00:19:26.459 "params": { 00:19:26.459 "name": "malloc0", 00:19:26.459 "num_blocks": 8192, 00:19:26.459 "block_size": 4096, 00:19:26.459 "physical_block_size": 4096, 00:19:26.459 "uuid": "353bd755-4428-4129-9a2c-9f7ed23c63fe", 00:19:26.459 "optimal_io_boundary": 0, 00:19:26.459 "md_size": 0, 00:19:26.459 "dif_type": 0, 00:19:26.459 "dif_is_head_of_md": false, 00:19:26.459 "dif_pi_format": 0 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "bdev_wait_for_examine" 00:19:26.459 } 00:19:26.459 ] 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "subsystem": "nbd", 00:19:26.459 "config": [] 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "subsystem": "scheduler", 00:19:26.459 "config": [ 00:19:26.459 { 00:19:26.459 "method": "framework_set_scheduler", 00:19:26.459 "params": { 00:19:26.459 "name": "static" 00:19:26.459 } 00:19:26.459 } 00:19:26.459 ] 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "subsystem": "nvmf", 00:19:26.459 "config": [ 00:19:26.459 { 
00:19:26.459 "method": "nvmf_set_config", 00:19:26.459 "params": { 00:19:26.459 "discovery_filter": "match_any", 00:19:26.459 "admin_cmd_passthru": { 00:19:26.459 "identify_ctrlr": false 00:19:26.459 }, 00:19:26.459 "dhchap_digests": [ 00:19:26.459 "sha256", 00:19:26.459 "sha384", 00:19:26.459 "sha512" 00:19:26.459 ], 00:19:26.459 "dhchap_dhgroups": [ 00:19:26.459 "null", 00:19:26.459 "ffdhe2048", 00:19:26.459 "ffdhe3072", 00:19:26.459 "ffdhe4096", 00:19:26.459 "ffdhe6144", 00:19:26.459 "ffdhe8192" 00:19:26.459 ] 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "nvmf_set_max_subsystems", 00:19:26.459 "params": { 00:19:26.459 "max_subsystems": 1024 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "nvmf_set_crdt", 00:19:26.459 "params": { 00:19:26.459 "crdt1": 0, 00:19:26.459 "crdt2": 0, 00:19:26.459 "crdt3": 0 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "nvmf_create_transport", 00:19:26.459 "params": { 00:19:26.459 "trtype": "TCP", 00:19:26.459 "max_queue_depth": 128, 00:19:26.459 "max_io_qpairs_per_ctrlr": 127, 00:19:26.459 "in_capsule_data_size": 4096, 00:19:26.459 "max_io_size": 131072, 00:19:26.459 "io_unit_size": 131072, 00:19:26.459 "max_aq_depth": 128, 00:19:26.459 "num_shared_buffers": 511, 00:19:26.459 "buf_cache_size": 4294967295, 00:19:26.459 "dif_insert_or_strip": false, 00:19:26.459 "zcopy": false, 00:19:26.459 "c2h_success": false, 00:19:26.459 "sock_priority": 0, 00:19:26.459 "abort_timeout_sec": 1, 00:19:26.459 "ack_timeout": 0, 00:19:26.459 "data_wr_pool_size": 0 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "nvmf_create_subsystem", 00:19:26.459 "params": { 00:19:26.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.459 "allow_any_host": false, 00:19:26.459 "serial_number": "00000000000000000000", 00:19:26.459 "model_number": "SPDK bdev Controller", 00:19:26.459 "max_namespaces": 32, 00:19:26.459 "min_cntlid": 1, 00:19:26.459 "max_cntlid": 65519, 00:19:26.459 
"ana_reporting": false 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "nvmf_subsystem_add_host", 00:19:26.459 "params": { 00:19:26.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.459 "host": "nqn.2016-06.io.spdk:host1", 00:19:26.459 "psk": "key0" 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "nvmf_subsystem_add_ns", 00:19:26.459 "params": { 00:19:26.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.459 "namespace": { 00:19:26.459 "nsid": 1, 00:19:26.459 "bdev_name": "malloc0", 00:19:26.459 "nguid": "353BD755442841299A2C9F7ED23C63FE", 00:19:26.459 "uuid": "353bd755-4428-4129-9a2c-9f7ed23c63fe", 00:19:26.459 "no_auto_visible": false 00:19:26.459 } 00:19:26.459 } 00:19:26.459 }, 00:19:26.459 { 00:19:26.459 "method": "nvmf_subsystem_add_listener", 00:19:26.459 "params": { 00:19:26.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.459 "listen_address": { 00:19:26.459 "trtype": "TCP", 00:19:26.459 "adrfam": "IPv4", 00:19:26.459 "traddr": "10.0.0.2", 00:19:26.459 "trsvcid": "4420" 00:19:26.459 }, 00:19:26.459 "secure_channel": false, 00:19:26.459 "sock_impl": "ssl" 00:19:26.459 } 00:19:26.459 } 00:19:26.459 ] 00:19:26.459 } 00:19:26.459 ] 00:19:26.459 }' 00:19:26.459 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:26.719 "subsystems": [ 00:19:26.719 { 00:19:26.719 "subsystem": "keyring", 00:19:26.719 "config": [ 00:19:26.719 { 00:19:26.719 "method": "keyring_file_add_key", 00:19:26.719 "params": { 00:19:26.719 "name": "key0", 00:19:26.719 "path": "/tmp/tmp.66Q0kCvEFX" 00:19:26.719 } 00:19:26.719 } 00:19:26.719 ] 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "subsystem": "iobuf", 00:19:26.719 "config": [ 00:19:26.719 { 00:19:26.719 "method": "iobuf_set_options", 00:19:26.719 "params": { 00:19:26.719 
"small_pool_count": 8192, 00:19:26.719 "large_pool_count": 1024, 00:19:26.719 "small_bufsize": 8192, 00:19:26.719 "large_bufsize": 135168, 00:19:26.719 "enable_numa": false 00:19:26.719 } 00:19:26.719 } 00:19:26.719 ] 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "subsystem": "sock", 00:19:26.719 "config": [ 00:19:26.719 { 00:19:26.719 "method": "sock_set_default_impl", 00:19:26.719 "params": { 00:19:26.719 "impl_name": "posix" 00:19:26.719 } 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "method": "sock_impl_set_options", 00:19:26.719 "params": { 00:19:26.719 "impl_name": "ssl", 00:19:26.719 "recv_buf_size": 4096, 00:19:26.719 "send_buf_size": 4096, 00:19:26.719 "enable_recv_pipe": true, 00:19:26.719 "enable_quickack": false, 00:19:26.719 "enable_placement_id": 0, 00:19:26.719 "enable_zerocopy_send_server": true, 00:19:26.719 "enable_zerocopy_send_client": false, 00:19:26.719 "zerocopy_threshold": 0, 00:19:26.719 "tls_version": 0, 00:19:26.719 "enable_ktls": false 00:19:26.719 } 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "method": "sock_impl_set_options", 00:19:26.719 "params": { 00:19:26.719 "impl_name": "posix", 00:19:26.719 "recv_buf_size": 2097152, 00:19:26.719 "send_buf_size": 2097152, 00:19:26.719 "enable_recv_pipe": true, 00:19:26.719 "enable_quickack": false, 00:19:26.719 "enable_placement_id": 0, 00:19:26.719 "enable_zerocopy_send_server": true, 00:19:26.719 "enable_zerocopy_send_client": false, 00:19:26.719 "zerocopy_threshold": 0, 00:19:26.719 "tls_version": 0, 00:19:26.719 "enable_ktls": false 00:19:26.719 } 00:19:26.719 } 00:19:26.719 ] 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "subsystem": "vmd", 00:19:26.719 "config": [] 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "subsystem": "accel", 00:19:26.719 "config": [ 00:19:26.719 { 00:19:26.719 "method": "accel_set_options", 00:19:26.719 "params": { 00:19:26.719 "small_cache_size": 128, 00:19:26.719 "large_cache_size": 16, 00:19:26.719 "task_count": 2048, 00:19:26.719 "sequence_count": 2048, 00:19:26.719 
"buf_count": 2048 00:19:26.719 } 00:19:26.719 } 00:19:26.719 ] 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "subsystem": "bdev", 00:19:26.719 "config": [ 00:19:26.719 { 00:19:26.719 "method": "bdev_set_options", 00:19:26.719 "params": { 00:19:26.719 "bdev_io_pool_size": 65535, 00:19:26.719 "bdev_io_cache_size": 256, 00:19:26.719 "bdev_auto_examine": true, 00:19:26.719 "iobuf_small_cache_size": 128, 00:19:26.719 "iobuf_large_cache_size": 16 00:19:26.719 } 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "method": "bdev_raid_set_options", 00:19:26.719 "params": { 00:19:26.719 "process_window_size_kb": 1024, 00:19:26.719 "process_max_bandwidth_mb_sec": 0 00:19:26.719 } 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "method": "bdev_iscsi_set_options", 00:19:26.719 "params": { 00:19:26.719 "timeout_sec": 30 00:19:26.719 } 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "method": "bdev_nvme_set_options", 00:19:26.719 "params": { 00:19:26.719 "action_on_timeout": "none", 00:19:26.719 "timeout_us": 0, 00:19:26.719 "timeout_admin_us": 0, 00:19:26.719 "keep_alive_timeout_ms": 10000, 00:19:26.719 "arbitration_burst": 0, 00:19:26.719 "low_priority_weight": 0, 00:19:26.719 "medium_priority_weight": 0, 00:19:26.719 "high_priority_weight": 0, 00:19:26.719 "nvme_adminq_poll_period_us": 10000, 00:19:26.719 "nvme_ioq_poll_period_us": 0, 00:19:26.719 "io_queue_requests": 512, 00:19:26.719 "delay_cmd_submit": true, 00:19:26.719 "transport_retry_count": 4, 00:19:26.719 "bdev_retry_count": 3, 00:19:26.719 "transport_ack_timeout": 0, 00:19:26.719 "ctrlr_loss_timeout_sec": 0, 00:19:26.719 "reconnect_delay_sec": 0, 00:19:26.719 "fast_io_fail_timeout_sec": 0, 00:19:26.719 "disable_auto_failback": false, 00:19:26.719 "generate_uuids": false, 00:19:26.719 "transport_tos": 0, 00:19:26.719 "nvme_error_stat": false, 00:19:26.719 "rdma_srq_size": 0, 00:19:26.719 "io_path_stat": false, 00:19:26.719 "allow_accel_sequence": false, 00:19:26.719 "rdma_max_cq_size": 0, 00:19:26.719 "rdma_cm_event_timeout_ms": 0, 
00:19:26.719 "dhchap_digests": [ 00:19:26.719 "sha256", 00:19:26.719 "sha384", 00:19:26.719 "sha512" 00:19:26.719 ], 00:19:26.719 "dhchap_dhgroups": [ 00:19:26.719 "null", 00:19:26.719 "ffdhe2048", 00:19:26.719 "ffdhe3072", 00:19:26.719 "ffdhe4096", 00:19:26.719 "ffdhe6144", 00:19:26.719 "ffdhe8192" 00:19:26.719 ] 00:19:26.719 } 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "method": "bdev_nvme_attach_controller", 00:19:26.719 "params": { 00:19:26.719 "name": "nvme0", 00:19:26.719 "trtype": "TCP", 00:19:26.719 "adrfam": "IPv4", 00:19:26.719 "traddr": "10.0.0.2", 00:19:26.719 "trsvcid": "4420", 00:19:26.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.719 "prchk_reftag": false, 00:19:26.719 "prchk_guard": false, 00:19:26.719 "ctrlr_loss_timeout_sec": 0, 00:19:26.719 "reconnect_delay_sec": 0, 00:19:26.719 "fast_io_fail_timeout_sec": 0, 00:19:26.719 "psk": "key0", 00:19:26.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.719 "hdgst": false, 00:19:26.719 "ddgst": false, 00:19:26.719 "multipath": "multipath" 00:19:26.719 } 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "method": "bdev_nvme_set_hotplug", 00:19:26.719 "params": { 00:19:26.719 "period_us": 100000, 00:19:26.719 "enable": false 00:19:26.719 } 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "method": "bdev_enable_histogram", 00:19:26.719 "params": { 00:19:26.719 "name": "nvme0n1", 00:19:26.719 "enable": true 00:19:26.719 } 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "method": "bdev_wait_for_examine" 00:19:26.719 } 00:19:26.719 ] 00:19:26.719 }, 00:19:26.719 { 00:19:26.719 "subsystem": "nbd", 00:19:26.719 "config": [] 00:19:26.719 } 00:19:26.719 ] 00:19:26.719 }' 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2286217 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2286217 ']' 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2286217 00:19:26.719 11:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2286217 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286217' 00:19:26.719 killing process with pid 2286217 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2286217 00:19:26.719 Received shutdown signal, test time was about 1.000000 seconds 00:19:26.719 00:19:26.719 Latency(us) 00:19:26.719 [2024-11-19T10:30:40.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.719 [2024-11-19T10:30:40.500Z] =================================================================================================================== 00:19:26.719 [2024-11-19T10:30:40.500Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2286217 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2286185 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2286185 ']' 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2286185 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.719 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.719 
11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2286185 00:19:26.979 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.979 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.979 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286185' 00:19:26.979 killing process with pid 2286185 00:19:26.979 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2286185 00:19:26.979 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2286185 00:19:26.979 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:26.979 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.979 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.979 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:26.979 "subsystems": [ 00:19:26.979 { 00:19:26.979 "subsystem": "keyring", 00:19:26.979 "config": [ 00:19:26.979 { 00:19:26.979 "method": "keyring_file_add_key", 00:19:26.979 "params": { 00:19:26.979 "name": "key0", 00:19:26.979 "path": "/tmp/tmp.66Q0kCvEFX" 00:19:26.979 } 00:19:26.979 } 00:19:26.979 ] 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "subsystem": "iobuf", 00:19:26.979 "config": [ 00:19:26.979 { 00:19:26.979 "method": "iobuf_set_options", 00:19:26.979 "params": { 00:19:26.979 "small_pool_count": 8192, 00:19:26.979 "large_pool_count": 1024, 00:19:26.979 "small_bufsize": 8192, 00:19:26.979 "large_bufsize": 135168, 00:19:26.979 "enable_numa": false 00:19:26.979 } 00:19:26.979 } 00:19:26.979 ] 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "subsystem": "sock", 00:19:26.979 "config": [ 
00:19:26.979 { 00:19:26.979 "method": "sock_set_default_impl", 00:19:26.979 "params": { 00:19:26.979 "impl_name": "posix" 00:19:26.979 } 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "method": "sock_impl_set_options", 00:19:26.979 "params": { 00:19:26.979 "impl_name": "ssl", 00:19:26.979 "recv_buf_size": 4096, 00:19:26.979 "send_buf_size": 4096, 00:19:26.979 "enable_recv_pipe": true, 00:19:26.979 "enable_quickack": false, 00:19:26.979 "enable_placement_id": 0, 00:19:26.979 "enable_zerocopy_send_server": true, 00:19:26.979 "enable_zerocopy_send_client": false, 00:19:26.979 "zerocopy_threshold": 0, 00:19:26.979 "tls_version": 0, 00:19:26.979 "enable_ktls": false 00:19:26.979 } 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "method": "sock_impl_set_options", 00:19:26.979 "params": { 00:19:26.979 "impl_name": "posix", 00:19:26.979 "recv_buf_size": 2097152, 00:19:26.979 "send_buf_size": 2097152, 00:19:26.979 "enable_recv_pipe": true, 00:19:26.979 "enable_quickack": false, 00:19:26.979 "enable_placement_id": 0, 00:19:26.979 "enable_zerocopy_send_server": true, 00:19:26.979 "enable_zerocopy_send_client": false, 00:19:26.979 "zerocopy_threshold": 0, 00:19:26.979 "tls_version": 0, 00:19:26.979 "enable_ktls": false 00:19:26.979 } 00:19:26.979 } 00:19:26.979 ] 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "subsystem": "vmd", 00:19:26.979 "config": [] 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "subsystem": "accel", 00:19:26.979 "config": [ 00:19:26.979 { 00:19:26.979 "method": "accel_set_options", 00:19:26.979 "params": { 00:19:26.979 "small_cache_size": 128, 00:19:26.979 "large_cache_size": 16, 00:19:26.979 "task_count": 2048, 00:19:26.979 "sequence_count": 2048, 00:19:26.979 "buf_count": 2048 00:19:26.979 } 00:19:26.979 } 00:19:26.979 ] 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "subsystem": "bdev", 00:19:26.979 "config": [ 00:19:26.979 { 00:19:26.979 "method": "bdev_set_options", 00:19:26.979 "params": { 00:19:26.979 "bdev_io_pool_size": 65535, 00:19:26.979 "bdev_io_cache_size": 
256, 00:19:26.979 "bdev_auto_examine": true, 00:19:26.979 "iobuf_small_cache_size": 128, 00:19:26.979 "iobuf_large_cache_size": 16 00:19:26.979 } 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "method": "bdev_raid_set_options", 00:19:26.979 "params": { 00:19:26.979 "process_window_size_kb": 1024, 00:19:26.979 "process_max_bandwidth_mb_sec": 0 00:19:26.979 } 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "method": "bdev_iscsi_set_options", 00:19:26.979 "params": { 00:19:26.979 "timeout_sec": 30 00:19:26.979 } 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "method": "bdev_nvme_set_options", 00:19:26.979 "params": { 00:19:26.979 "action_on_timeout": "none", 00:19:26.979 "timeout_us": 0, 00:19:26.979 "timeout_admin_us": 0, 00:19:26.979 "keep_alive_timeout_ms": 10000, 00:19:26.979 "arbitration_burst": 0, 00:19:26.979 "low_priority_weight": 0, 00:19:26.979 "medium_priority_weight": 0, 00:19:26.979 "high_priority_weight": 0, 00:19:26.979 "nvme_adminq_poll_period_us": 10000, 00:19:26.979 "nvme_ioq_poll_period_us": 0, 00:19:26.979 "io_queue_requests": 0, 00:19:26.979 "delay_cmd_submit": true, 00:19:26.979 "transport_retry_count": 4, 00:19:26.979 "bdev_retry_count": 3, 00:19:26.979 "transport_ack_timeout": 0, 00:19:26.979 "ctrlr_loss_timeout_sec": 0, 00:19:26.979 "reconnect_delay_sec": 0, 00:19:26.979 "fast_io_fail_timeout_sec": 0, 00:19:26.979 "disable_auto_failback": false, 00:19:26.979 "generate_uuids": false, 00:19:26.979 "transport_tos": 0, 00:19:26.979 "nvme_error_stat": false, 00:19:26.979 "rdma_srq_size": 0, 00:19:26.979 "io_path_stat": false, 00:19:26.979 "allow_accel_sequence": false, 00:19:26.979 "rdma_max_cq_size": 0, 00:19:26.979 "rdma_cm_event_timeout_ms": 0, 00:19:26.979 "dhchap_digests": [ 00:19:26.979 "sha256", 00:19:26.979 "sha384", 00:19:26.979 "sha512" 00:19:26.979 ], 00:19:26.979 "dhchap_dhgroups": [ 00:19:26.979 "null", 00:19:26.979 "ffdhe2048", 00:19:26.979 "ffdhe3072", 00:19:26.979 "ffdhe4096", 00:19:26.979 "ffdhe6144", 00:19:26.979 "ffdhe8192" 00:19:26.979 ] 
00:19:26.979 } 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "method": "bdev_nvme_set_hotplug", 00:19:26.979 "params": { 00:19:26.979 "period_us": 100000, 00:19:26.979 "enable": false 00:19:26.979 } 00:19:26.979 }, 00:19:26.979 { 00:19:26.979 "method": "bdev_malloc_create", 00:19:26.979 "params": { 00:19:26.979 "name": "malloc0", 00:19:26.980 "num_blocks": 8192, 00:19:26.980 "block_size": 4096, 00:19:26.980 "physical_block_size": 4096, 00:19:26.980 "uuid": "353bd755-4428-4129-9a2c-9f7ed23c63fe", 00:19:26.980 "optimal_io_boundary": 0, 00:19:26.980 "md_size": 0, 00:19:26.980 "dif_type": 0, 00:19:26.980 "dif_is_head_of_md": false, 00:19:26.980 "dif_pi_format": 0 00:19:26.980 } 00:19:26.980 }, 00:19:26.980 { 00:19:26.980 "method": "bdev_wait_for_examine" 00:19:26.980 } 00:19:26.980 ] 00:19:26.980 }, 00:19:26.980 { 00:19:26.980 "subsystem": "nbd", 00:19:26.980 "config": [] 00:19:26.980 }, 00:19:26.980 { 00:19:26.980 "subsystem": "scheduler", 00:19:26.980 "config": [ 00:19:26.980 { 00:19:26.980 "method": "framework_set_scheduler", 00:19:26.980 "params": { 00:19:26.980 "name": "static" 00:19:26.980 } 00:19:26.980 } 00:19:26.980 ] 00:19:26.980 }, 00:19:26.980 { 00:19:26.980 "subsystem": "nvmf", 00:19:26.980 "config": [ 00:19:26.980 { 00:19:26.980 "method": "nvmf_set_config", 00:19:26.980 "params": { 00:19:26.980 "discovery_filter": "match_any", 00:19:26.980 "admin_cmd_passthru": { 00:19:26.980 "identify_ctrlr": false 00:19:26.980 }, 00:19:26.980 "dhchap_digests": [ 00:19:26.980 "sha256", 00:19:26.980 "sha384", 00:19:26.980 "sha512" 00:19:26.980 ], 00:19:26.980 "dhchap_dhgroups": [ 00:19:26.980 "null", 00:19:26.980 "ffdhe2048", 00:19:26.980 "ffdhe3072", 00:19:26.980 "ffdhe4096", 00:19:26.980 "ffdhe6144", 00:19:26.980 "ffdhe8192" 00:19:26.980 ] 00:19:26.980 } 00:19:26.980 }, 00:19:26.980 { 00:19:26.980 "method": "nvmf_set_max_subsystems", 00:19:26.980 "params": { 00:19:26.980 "max_subsystems": 1024 00:19:26.980 } 00:19:26.980 }, 00:19:26.980 { 00:19:26.980 "method": 
"nvmf_set_crdt", 00:19:26.980 "params": { 00:19:26.980 "crdt1": 0, 00:19:26.980 "crdt2": 0, 00:19:26.980 "crdt3": 0 00:19:26.980 } 00:19:26.980 }, 00:19:26.980 { 00:19:26.980 "method": "nvmf_create_transport", 00:19:26.980 "params": { 00:19:26.980 "trtype": "TCP", 00:19:26.980 "max_queue_depth": 128, 00:19:26.980 "max_io_qpairs_per_ctrlr": 127, 00:19:26.980 "in_capsule_data_size": 4096, 00:19:26.980 "max_io_size": 131072, 00:19:26.980 "io_unit_size": 131072, 00:19:26.980 "max_aq_depth": 128, 00:19:26.980 "num_shared_buffers": 511, 00:19:26.980 "buf_cache_size": 4294967295, 00:19:26.980 "dif_insert_or_strip": false, 00:19:26.980 "zcopy": false, 00:19:26.980 "c2h_success": false, 00:19:26.980 "sock_priority": 0, 00:19:26.980 "abort_timeout_sec": 1, 00:19:26.980 "ack_timeout": 0, 00:19:26.980 "data_wr_pool_size": 0 00:19:26.980 } 00:19:26.980 }, 00:19:26.980 { 00:19:26.980 "method": "nvmf_create_subsystem", 00:19:26.980 "params": { 00:19:26.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.980 "allow_any_host": false, 00:19:26.980 "serial_number": "00000000000000000000", 00:19:26.980 "model_number": "SPDK bdev Controller", 00:19:26.980 "max_namespaces": 32, 00:19:26.980 "min_cntlid": 1, 00:19:26.980 "max_cntlid": 65519, 00:19:26.980 "ana_reporting": false 00:19:26.980 } 00:19:26.980 }, 00:19:26.980 { 00:19:26.980 "method": "nvmf_subsystem_add_host", 00:19:26.980 "params": { 00:19:26.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.980 "host": "nqn.2016-06.io.spdk:host1", 00:19:26.980 "psk": "key0" 00:19:26.980 } 00:19:26.980 }, 00:19:26.980 { 00:19:26.980 "method": "nvmf_subsystem_add_ns", 00:19:26.980 "params": { 00:19:26.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.980 "namespace": { 00:19:26.980 "nsid": 1, 00:19:26.980 "bdev_name": "malloc0", 00:19:26.980 "nguid": "353BD755442841299A2C9F7ED23C63FE", 00:19:26.980 "uuid": "353bd755-4428-4129-9a2c-9f7ed23c63fe", 00:19:26.980 "no_auto_visible": false 00:19:26.980 } 00:19:26.980 } 00:19:26.980 }, 00:19:26.980 { 
00:19:26.980 "method": "nvmf_subsystem_add_listener", 00:19:26.980 "params": { 00:19:26.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.980 "listen_address": { 00:19:26.980 "trtype": "TCP", 00:19:26.980 "adrfam": "IPv4", 00:19:26.980 "traddr": "10.0.0.2", 00:19:26.980 "trsvcid": "4420" 00:19:26.980 }, 00:19:26.980 "secure_channel": false, 00:19:26.980 "sock_impl": "ssl" 00:19:26.980 } 00:19:26.980 } 00:19:26.980 ] 00:19:26.980 } 00:19:26.980 ] 00:19:26.980 }' 00:19:26.980 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.980 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2286685 00:19:26.980 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:26.980 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2286685 00:19:26.980 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2286685 ']' 00:19:26.980 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.980 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.980 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.980 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.980 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.980 [2024-11-19 11:30:40.742850] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:19:26.980 [2024-11-19 11:30:40.742895] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.239 [2024-11-19 11:30:40.821506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.239 [2024-11-19 11:30:40.862532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.239 [2024-11-19 11:30:40.862571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.239 [2024-11-19 11:30:40.862578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.239 [2024-11-19 11:30:40.862585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.239 [2024-11-19 11:30:40.862590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:27.239 [2024-11-19 11:30:40.863233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.499 [2024-11-19 11:30:41.076233] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.499 [2024-11-19 11:30:41.108270] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.499 [2024-11-19 11:30:41.108475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2286927 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2286927 /var/tmp/bdevperf.sock 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2286927 ']' 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.068 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:28.068 "subsystems": [ 00:19:28.068 { 00:19:28.068 "subsystem": "keyring", 00:19:28.068 "config": [ 00:19:28.068 { 00:19:28.068 "method": "keyring_file_add_key", 00:19:28.068 "params": { 00:19:28.068 "name": "key0", 00:19:28.068 "path": "/tmp/tmp.66Q0kCvEFX" 00:19:28.068 } 00:19:28.068 } 00:19:28.068 ] 00:19:28.068 }, 00:19:28.068 { 00:19:28.068 "subsystem": "iobuf", 00:19:28.068 "config": [ 00:19:28.068 { 00:19:28.068 "method": "iobuf_set_options", 00:19:28.068 "params": { 00:19:28.068 "small_pool_count": 8192, 00:19:28.068 "large_pool_count": 1024, 00:19:28.068 "small_bufsize": 8192, 00:19:28.068 "large_bufsize": 135168, 00:19:28.068 "enable_numa": false 00:19:28.068 } 00:19:28.068 } 00:19:28.068 ] 00:19:28.068 }, 00:19:28.068 { 00:19:28.068 "subsystem": "sock", 00:19:28.068 "config": [ 00:19:28.068 { 00:19:28.068 "method": "sock_set_default_impl", 00:19:28.068 "params": { 00:19:28.068 "impl_name": "posix" 00:19:28.068 } 00:19:28.068 }, 00:19:28.068 { 00:19:28.068 "method": "sock_impl_set_options", 00:19:28.068 "params": { 00:19:28.068 "impl_name": "ssl", 00:19:28.068 "recv_buf_size": 4096, 00:19:28.068 "send_buf_size": 4096, 00:19:28.068 "enable_recv_pipe": true, 00:19:28.068 "enable_quickack": false, 00:19:28.068 "enable_placement_id": 0, 00:19:28.068 "enable_zerocopy_send_server": true, 00:19:28.068 "enable_zerocopy_send_client": false, 00:19:28.068 "zerocopy_threshold": 0, 00:19:28.068 "tls_version": 0, 00:19:28.068 "enable_ktls": false 00:19:28.068 } 00:19:28.068 }, 00:19:28.068 { 00:19:28.068 "method": "sock_impl_set_options", 00:19:28.068 "params": { 
00:19:28.068 "impl_name": "posix", 00:19:28.068 "recv_buf_size": 2097152, 00:19:28.068 "send_buf_size": 2097152, 00:19:28.068 "enable_recv_pipe": true, 00:19:28.068 "enable_quickack": false, 00:19:28.068 "enable_placement_id": 0, 00:19:28.068 "enable_zerocopy_send_server": true, 00:19:28.068 "enable_zerocopy_send_client": false, 00:19:28.068 "zerocopy_threshold": 0, 00:19:28.068 "tls_version": 0, 00:19:28.068 "enable_ktls": false 00:19:28.068 } 00:19:28.068 } 00:19:28.068 ] 00:19:28.068 }, 00:19:28.068 { 00:19:28.068 "subsystem": "vmd", 00:19:28.068 "config": [] 00:19:28.068 }, 00:19:28.068 { 00:19:28.068 "subsystem": "accel", 00:19:28.068 "config": [ 00:19:28.068 { 00:19:28.068 "method": "accel_set_options", 00:19:28.068 "params": { 00:19:28.068 "small_cache_size": 128, 00:19:28.068 "large_cache_size": 16, 00:19:28.068 "task_count": 2048, 00:19:28.068 "sequence_count": 2048, 00:19:28.068 "buf_count": 2048 00:19:28.068 } 00:19:28.068 } 00:19:28.068 ] 00:19:28.068 }, 00:19:28.068 { 00:19:28.068 "subsystem": "bdev", 00:19:28.068 "config": [ 00:19:28.068 { 00:19:28.068 "method": "bdev_set_options", 00:19:28.068 "params": { 00:19:28.068 "bdev_io_pool_size": 65535, 00:19:28.068 "bdev_io_cache_size": 256, 00:19:28.068 "bdev_auto_examine": true, 00:19:28.068 "iobuf_small_cache_size": 128, 00:19:28.068 "iobuf_large_cache_size": 16 00:19:28.068 } 00:19:28.068 }, 00:19:28.068 { 00:19:28.068 "method": "bdev_raid_set_options", 00:19:28.068 "params": { 00:19:28.068 "process_window_size_kb": 1024, 00:19:28.068 "process_max_bandwidth_mb_sec": 0 00:19:28.068 } 00:19:28.068 }, 00:19:28.068 { 00:19:28.068 "method": "bdev_iscsi_set_options", 00:19:28.068 "params": { 00:19:28.068 "timeout_sec": 30 00:19:28.068 } 00:19:28.068 }, 00:19:28.068 { 00:19:28.068 "method": "bdev_nvme_set_options", 00:19:28.068 "params": { 00:19:28.068 "action_on_timeout": "none", 00:19:28.068 "timeout_us": 0, 00:19:28.068 "timeout_admin_us": 0, 00:19:28.068 "keep_alive_timeout_ms": 10000, 00:19:28.068 
"arbitration_burst": 0, 00:19:28.068 "low_priority_weight": 0, 00:19:28.068 "medium_priority_weight": 0, 00:19:28.068 "high_priority_weight": 0, 00:19:28.068 "nvme_adminq_poll_period_us": 10000, 00:19:28.068 "nvme_ioq_poll_period_us": 0, 00:19:28.069 "io_queue_requests": 512, 00:19:28.069 "delay_cmd_submit": true, 00:19:28.069 "transport_retry_count": 4, 00:19:28.069 "bdev_retry_count": 3, 00:19:28.069 "transport_ack_timeout": 0, 00:19:28.069 "ctrlr_loss_timeout_sec": 0, 00:19:28.069 "reconnect_delay_sec": 0, 00:19:28.069 "fast_io_fail_timeout_sec": 0, 00:19:28.069 "disable_auto_failback": false, 00:19:28.069 "generate_uuids": false, 00:19:28.069 "transport_tos": 0, 00:19:28.069 "nvme_error_stat": false, 00:19:28.069 "rdma_srq_size": 0, 00:19:28.069 "io_path_stat": false, 00:19:28.069 "allow_accel_sequence": false, 00:19:28.069 "rdma_max_cq_size": 0, 00:19:28.069 "rdma_cm_event_timeout_ms": 0, 00:19:28.069 "dhchap_digests": [ 00:19:28.069 "sha256", 00:19:28.069 "sha384", 00:19:28.069 "sha512" 00:19:28.069 ], 00:19:28.069 "dhchap_dhgroups": [ 00:19:28.069 "null", 00:19:28.069 "ffdhe2048", 00:19:28.069 "ffdhe3072", 00:19:28.069 "ffdhe4096", 00:19:28.069 "ffdhe6144", 00:19:28.069 "ffdhe8192" 00:19:28.069 ] 00:19:28.069 } 00:19:28.069 }, 00:19:28.069 { 00:19:28.069 "method": "bdev_nvme_attach_controller", 00:19:28.069 "params": { 00:19:28.069 "name": "nvme0", 00:19:28.069 "trtype": "TCP", 00:19:28.069 "adrfam": "IPv4", 00:19:28.069 "traddr": "10.0.0.2", 00:19:28.069 "trsvcid": "4420", 00:19:28.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.069 "prchk_reftag": false, 00:19:28.069 "prchk_guard": false, 00:19:28.069 "ctrlr_loss_timeout_sec": 0, 00:19:28.069 "reconnect_delay_sec": 0, 00:19:28.069 "fast_io_fail_timeout_sec": 0, 00:19:28.069 "psk": "key0", 00:19:28.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.069 "hdgst": false, 00:19:28.069 "ddgst": false, 00:19:28.069 "multipath": "multipath" 00:19:28.069 } 00:19:28.069 }, 00:19:28.069 { 00:19:28.069 
"method": "bdev_nvme_set_hotplug", 00:19:28.069 "params": { 00:19:28.069 "period_us": 100000, 00:19:28.069 "enable": false 00:19:28.069 } 00:19:28.069 }, 00:19:28.069 { 00:19:28.069 "method": "bdev_enable_histogram", 00:19:28.069 "params": { 00:19:28.069 "name": "nvme0n1", 00:19:28.069 "enable": true 00:19:28.069 } 00:19:28.069 }, 00:19:28.069 { 00:19:28.069 "method": "bdev_wait_for_examine" 00:19:28.069 } 00:19:28.069 ] 00:19:28.069 }, 00:19:28.069 { 00:19:28.069 "subsystem": "nbd", 00:19:28.069 "config": [] 00:19:28.069 } 00:19:28.069 ] 00:19:28.069 }' 00:19:28.069 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.069 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.069 [2024-11-19 11:30:41.668232] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:28.069 [2024-11-19 11:30:41.668281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286927 ] 00:19:28.069 [2024-11-19 11:30:41.742559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.069 [2024-11-19 11:30:41.785383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.327 [2024-11-19 11:30:41.939388] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.895 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.895 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.895 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:28.895 11:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:29.154 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.154 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:29.154 Running I/O for 1 seconds... 00:19:30.091 5312.00 IOPS, 20.75 MiB/s 00:19:30.091 Latency(us) 00:19:30.091 [2024-11-19T10:30:43.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.091 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:30.091 Verification LBA range: start 0x0 length 0x2000 00:19:30.091 nvme0n1 : 1.01 5364.14 20.95 0.00 0.00 23699.29 6012.22 25758.50 00:19:30.091 [2024-11-19T10:30:43.872Z] =================================================================================================================== 00:19:30.091 [2024-11-19T10:30:43.872Z] Total : 5364.14 20.95 0.00 0.00 23699.29 6012.22 25758.50 00:19:30.091 { 00:19:30.091 "results": [ 00:19:30.091 { 00:19:30.091 "job": "nvme0n1", 00:19:30.091 "core_mask": "0x2", 00:19:30.091 "workload": "verify", 00:19:30.091 "status": "finished", 00:19:30.091 "verify_range": { 00:19:30.091 "start": 0, 00:19:30.091 "length": 8192 00:19:30.091 }, 00:19:30.091 "queue_depth": 128, 00:19:30.091 "io_size": 4096, 00:19:30.091 "runtime": 1.014143, 00:19:30.091 "iops": 5364.135038155368, 00:19:30.091 "mibps": 20.953652492794408, 00:19:30.091 "io_failed": 0, 00:19:30.091 "io_timeout": 0, 00:19:30.091 "avg_latency_us": 23699.2883887468, 00:19:30.091 "min_latency_us": 6012.215652173913, 00:19:30.091 "max_latency_us": 25758.497391304347 00:19:30.091 } 00:19:30.091 ], 00:19:30.091 "core_count": 1 00:19:30.091 } 00:19:30.091 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:30.091 11:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:30.091 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:30.091 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:30.091 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:30.091 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:30.091 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:30.091 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:30.091 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:30.091 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:30.091 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:30.091 nvmf_trace.0 00:19:30.351 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:30.351 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2286927 00:19:30.351 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2286927 ']' 00:19:30.351 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2286927 00:19:30.351 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:30.351 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.351 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2286927 00:19:30.351 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:30.351 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:30.351 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286927' 00:19:30.351 killing process with pid 2286927 00:19:30.351 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2286927 00:19:30.351 Received shutdown signal, test time was about 1.000000 seconds 00:19:30.351 00:19:30.351 Latency(us) 00:19:30.351 [2024-11-19T10:30:44.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.351 [2024-11-19T10:30:44.132Z] =================================================================================================================== 00:19:30.351 [2024-11-19T10:30:44.132Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.351 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2286927 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:30.616 rmmod nvme_tcp 00:19:30.616 rmmod nvme_fabrics 00:19:30.616 rmmod nvme_keyring 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2286685 ']' 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2286685 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2286685 ']' 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2286685 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2286685 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.616 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286685' 00:19:30.616 killing process with pid 2286685 00:19:30.617 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2286685 00:19:30.617 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2286685 00:19:30.876 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:30.876 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:30.876 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:30.876 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:30.876 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:30.876 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:30.876 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:30.876 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:30.876 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:30.876 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.876 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.876 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.786 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:32.786 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.RRmUhifBqp /tmp/tmp.GBKbVaHGmB /tmp/tmp.66Q0kCvEFX 00:19:32.786 00:19:32.786 real 1m20.191s 00:19:32.786 user 2m3.247s 00:19:32.786 sys 0m30.065s 00:19:32.786 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.786 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.786 ************************************ 00:19:32.786 END TEST nvmf_tls 00:19:32.786 ************************************ 00:19:32.786 11:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:32.786 11:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:32.786 11:30:46 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.786 11:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.046 ************************************ 00:19:33.046 START TEST nvmf_fips 00:19:33.046 ************************************ 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:33.046 * Looking for test storage... 00:19:33.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.046 
11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:33.046 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:33.047 11:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:33.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.047 --rc genhtml_branch_coverage=1 00:19:33.047 --rc genhtml_function_coverage=1 00:19:33.047 --rc genhtml_legend=1 00:19:33.047 --rc geninfo_all_blocks=1 00:19:33.047 --rc geninfo_unexecuted_blocks=1 00:19:33.047 00:19:33.047 ' 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:33.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.047 --rc genhtml_branch_coverage=1 00:19:33.047 --rc genhtml_function_coverage=1 00:19:33.047 --rc genhtml_legend=1 00:19:33.047 --rc geninfo_all_blocks=1 00:19:33.047 --rc geninfo_unexecuted_blocks=1 00:19:33.047 00:19:33.047 ' 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:33.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.047 --rc genhtml_branch_coverage=1 00:19:33.047 --rc genhtml_function_coverage=1 00:19:33.047 --rc genhtml_legend=1 00:19:33.047 --rc geninfo_all_blocks=1 00:19:33.047 --rc geninfo_unexecuted_blocks=1 00:19:33.047 00:19:33.047 ' 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:33.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.047 --rc genhtml_branch_coverage=1 00:19:33.047 --rc genhtml_function_coverage=1 00:19:33.047 --rc genhtml_legend=1 00:19:33.047 --rc geninfo_all_blocks=1 00:19:33.047 --rc geninfo_unexecuted_blocks=1 00:19:33.047 00:19:33.047 ' 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.047 11:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.047 11:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:33.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:33.047 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:33.048 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:33.308 Error setting digest 00:19:33.308 4052C33E857F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:33.308 4052C33E857F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:33.308 11:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:33.308 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
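Note: the `ge 3.1.1 3.0.0` / `cmp_versions` trace earlier in this run compares dotted version strings field by field, treating missing fields as 0. A condensed sketch of that comparison (simplified; the real scripts/common.sh also validates each field with `decimal`):

```shell
# Return 0 if dotted version $1 >= $2, comparing numeric fields
# left to right, as in the cmp_versions trace above (simplified).
version_ge() {
    local -a v1 v2
    local IFS='.-:'                     # same separators the trace uses
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 0; fi
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 1; fi
    done
    return 0                            # all fields equal
}

version_ge 3.1.1 3.0.0 && echo "3.1.1 >= 3.0.0"
```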
00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:39.879 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:39.879 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:39.879 Found net devices under 0000:86:00.0: cvl_0_0 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
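Note: the loop above finds the kernel net interface behind each NIC by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the directory prefix (`${pci_net_devs[@]##*/}`), which is how `cvl_0_0` and `cvl_0_1` are discovered. A sketch of that lookup, with the sysfs root as a parameter so it can run against any directory tree (hypothetical helper, not SPDK's code):

```shell
# List network interface names registered under a PCI device, mimicking
# the pci_net_devs glob in the trace above.
# $1 = sysfs root (normally /sys), $2 = PCI address, e.g. 0000:86:00.0
pci_net_ifaces() {
    local root=$1 pci=$2 path
    for path in "$root/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue      # glob matched nothing
        printf '%s\n' "${path##*/}"     # keep only the interface name
    done
}
```

Against a real system this would be called as `pci_net_ifaces /sys 0000:86:00.0`.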
00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:39.879 Found net devices under 0000:86:00.1: cvl_0_1 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.879 11:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.879 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:39.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:19:39.880 00:19:39.880 --- 10.0.0.2 ping statistics --- 00:19:39.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.880 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:39.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:19:39.880 00:19:39.880 --- 10.0.0.1 ping statistics --- 00:19:39.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.880 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:39.880 11:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2290901 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2290901 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2290901 ']' 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.880 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:39.880 [2024-11-19 11:30:52.942795] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:19:39.880 [2024-11-19 11:30:52.942842] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.880 [2024-11-19 11:30:53.023516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.880 [2024-11-19 11:30:53.065234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.880 [2024-11-19 11:30:53.065269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.880 [2024-11-19 11:30:53.065275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.880 [2024-11-19 11:30:53.065281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.880 [2024-11-19 11:30:53.065287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
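Note: `waitforlisten 2290901` above blocks until the freshly started nvmf_tgt is listening on /var/tmp/spdk.sock. A minimal sketch of that kind of bounded polling loop; this is a hypothetical simplification, since the real waitforlisten also verifies the PID is alive and that the RPC socket actually answers:

```shell
# Poll until a path exists, up to $2 half-second retries (default 100).
# Simplified stand-in for waitforlisten polling /var/tmp/spdk.sock.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -e "$path" ] && return 0
        sleep 0.5
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```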
00:19:39.880 [2024-11-19 11:30:53.065875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.O2N 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.O2N 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.O2N 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.O2N 00:19:40.139 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:40.398 [2024-11-19 11:30:53.982273] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.398 [2024-11-19 11:30:53.998274] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:40.398 [2024-11-19 11:30:53.998434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.398 malloc0 00:19:40.398 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.398 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2291028 00:19:40.398 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.398 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2291028 /var/tmp/bdevperf.sock 00:19:40.398 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2291028 ']' 00:19:40.398 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.398 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.398 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.398 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.398 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:40.398 [2024-11-19 11:30:54.130882] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:19:40.398 [2024-11-19 11:30:54.130933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291028 ] 00:19:40.657 [2024-11-19 11:30:54.208153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.657 [2024-11-19 11:30:54.249049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.224 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.224 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:41.224 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.O2N 00:19:41.482 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:41.741 [2024-11-19 11:30:55.321970] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:41.741 TLSTESTn1 00:19:41.741 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:41.741 Running I/O for 10 seconds... 
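Note: before this bdevperf run, fips.sh@138-140 above wrote the TLS PSK to a `mktemp -t spdk-psk.XXX` file and restricted it with `chmod 0600` before registering it via `keyring_file_add_key`. A sketch of that setup with a placeholder key value (the real key from the trace is not reproduced here):

```shell
# Write a TLS PSK to a private temp file, as fips.sh does before
# handing the path to keyring_file_add_key. Key value is a placeholder.
key_path=$(mktemp -t spdk-psk.XXX)
printf '%s' 'NVMeTLSkey-1:01:placeholder' > "$key_path"
chmod 0600 "$key_path"                  # owner-only, matching the trace
```

Cleanup (the test's `cleanup` trap removes this file): `rm -f "$key_path"`.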
00:19:43.768 5223.00 IOPS, 20.40 MiB/s [2024-11-19T10:30:58.925Z] 5452.50 IOPS, 21.30 MiB/s [2024-11-19T10:30:59.860Z] 5405.00 IOPS, 21.11 MiB/s [2024-11-19T10:31:00.795Z] 5471.75 IOPS, 21.37 MiB/s [2024-11-19T10:31:01.729Z] 5411.60 IOPS, 21.14 MiB/s [2024-11-19T10:31:02.663Z] 5436.17 IOPS, 21.24 MiB/s [2024-11-19T10:31:03.599Z] 5429.14 IOPS, 21.21 MiB/s [2024-11-19T10:31:04.976Z] 5451.00 IOPS, 21.29 MiB/s [2024-11-19T10:31:05.542Z] 5457.11 IOPS, 21.32 MiB/s [2024-11-19T10:31:05.802Z] 5442.70 IOPS, 21.26 MiB/s 00:19:52.021 Latency(us) 00:19:52.021 [2024-11-19T10:31:05.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.021 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:52.021 Verification LBA range: start 0x0 length 0x2000 00:19:52.021 TLSTESTn1 : 10.02 5444.08 21.27 0.00 0.00 23475.01 6439.62 24504.77 00:19:52.021 [2024-11-19T10:31:05.802Z] =================================================================================================================== 00:19:52.021 [2024-11-19T10:31:05.802Z] Total : 5444.08 21.27 0.00 0.00 23475.01 6439.62 24504.77 00:19:52.021 { 00:19:52.021 "results": [ 00:19:52.021 { 00:19:52.021 "job": "TLSTESTn1", 00:19:52.021 "core_mask": "0x4", 00:19:52.021 "workload": "verify", 00:19:52.021 "status": "finished", 00:19:52.021 "verify_range": { 00:19:52.021 "start": 0, 00:19:52.021 "length": 8192 00:19:52.021 }, 00:19:52.021 "queue_depth": 128, 00:19:52.021 "io_size": 4096, 00:19:52.021 "runtime": 10.020795, 00:19:52.021 "iops": 5444.079037641225, 00:19:52.021 "mibps": 21.265933740786036, 00:19:52.021 "io_failed": 0, 00:19:52.021 "io_timeout": 0, 00:19:52.021 "avg_latency_us": 23475.012989762043, 00:19:52.021 "min_latency_us": 6439.624347826087, 00:19:52.021 "max_latency_us": 24504.765217391305 00:19:52.021 } 00:19:52.021 ], 00:19:52.021 "core_count": 1 00:19:52.021 } 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:52.021 
11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:52.021 nvmf_trace.0 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2291028 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2291028 ']' 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2291028 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2291028 00:19:52.021 11:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2291028' 00:19:52.021 killing process with pid 2291028 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2291028 00:19:52.021 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.021 00:19:52.021 Latency(us) 00:19:52.021 [2024-11-19T10:31:05.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.021 [2024-11-19T10:31:05.802Z] =================================================================================================================== 00:19:52.021 [2024-11-19T10:31:05.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.021 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2291028 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:52.281 rmmod nvme_tcp 00:19:52.281 rmmod nvme_fabrics 00:19:52.281 rmmod nvme_keyring 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2290901 ']' 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2290901 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2290901 ']' 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2290901 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.281 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2290901 00:19:52.281 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:52.281 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:52.281 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2290901' 00:19:52.281 killing process with pid 2290901 00:19:52.281 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2290901 00:19:52.281 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2290901 00:19:52.541 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:52.541 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:52.541 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:52.541 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:52.541 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:52.541 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:52.541 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:52.541 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:52.541 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:52.541 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.541 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.541 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.O2N 00:19:55.080 00:19:55.080 real 0m21.677s 00:19:55.080 user 0m23.520s 00:19:55.080 sys 0m9.612s 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:55.080 ************************************ 00:19:55.080 END TEST nvmf_fips 00:19:55.080 ************************************ 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:55.080 ************************************ 00:19:55.080 START TEST nvmf_control_msg_list 00:19:55.080 ************************************ 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:55.080 * Looking for test storage... 00:19:55.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:55.080 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:55.081 11:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:55.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.081 --rc genhtml_branch_coverage=1 00:19:55.081 --rc genhtml_function_coverage=1 00:19:55.081 --rc genhtml_legend=1 00:19:55.081 --rc geninfo_all_blocks=1 00:19:55.081 --rc geninfo_unexecuted_blocks=1 00:19:55.081 00:19:55.081 ' 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:55.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.081 --rc genhtml_branch_coverage=1 00:19:55.081 --rc genhtml_function_coverage=1 00:19:55.081 --rc genhtml_legend=1 00:19:55.081 --rc geninfo_all_blocks=1 00:19:55.081 --rc geninfo_unexecuted_blocks=1 00:19:55.081 00:19:55.081 ' 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:55.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.081 --rc genhtml_branch_coverage=1 00:19:55.081 --rc genhtml_function_coverage=1 00:19:55.081 --rc genhtml_legend=1 00:19:55.081 --rc geninfo_all_blocks=1 00:19:55.081 --rc geninfo_unexecuted_blocks=1 00:19:55.081 00:19:55.081 ' 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:19:55.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.081 --rc genhtml_branch_coverage=1 00:19:55.081 --rc genhtml_function_coverage=1 00:19:55.081 --rc genhtml_legend=1 00:19:55.081 --rc geninfo_all_blocks=1 00:19:55.081 --rc geninfo_unexecuted_blocks=1 00:19:55.081 00:19:55.081 ' 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.081 11:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:55.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:55.081 11:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:55.081 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:55.082 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.082 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:55.082 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.082 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:55.082 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:55.082 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:55.082 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:01.653 11:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:01.653 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.653 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:01.654 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:01.654 11:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:01.654 Found net devices under 0000:86:00.0: cvl_0_0 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:01.654 11:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:01.654 Found net devices under 0000:86:00.1: cvl_0_1 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.654 11:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:01.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:20:01.654 00:20:01.654 --- 10.0.0.2 ping statistics --- 00:20:01.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.654 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:20:01.654 00:20:01.654 --- 10.0.0.1 ping statistics --- 00:20:01.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.654 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2296573 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:01.654 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2296573 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2296573 ']' 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.655 [2024-11-19 11:31:14.506720] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:20:01.655 [2024-11-19 11:31:14.506766] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.655 [2024-11-19 11:31:14.585087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.655 [2024-11-19 11:31:14.628639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.655 [2024-11-19 11:31:14.628674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.655 [2024-11-19 11:31:14.628681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.655 [2024-11-19 11:31:14.628688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.655 [2024-11-19 11:31:14.628694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:01.655 [2024-11-19 11:31:14.629260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.655 [2024-11-19 11:31:14.769386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.655 Malloc0 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:01.655 [2024-11-19 11:31:14.809729] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2296595 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2296596 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2296598 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:01.655 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2296595 00:20:01.655 [2024-11-19 11:31:14.888166] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:01.655 [2024-11-19 11:31:14.898240] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:01.655 [2024-11-19 11:31:14.898384] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:02.592 Initializing NVMe Controllers 00:20:02.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:02.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:02.592 Initialization complete. Launching workers. 00:20:02.592 ======================================================== 00:20:02.592 Latency(us) 00:20:02.592 Device Information : IOPS MiB/s Average min max 00:20:02.592 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 97.00 0.38 10715.35 236.15 41406.98 00:20:02.592 ======================================================== 00:20:02.592 Total : 97.00 0.38 10715.35 236.15 41406.98 00:20:02.592 00:20:02.592 Initializing NVMe Controllers 00:20:02.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:02.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:02.592 Initialization complete. Launching workers. 
00:20:02.592 ======================================================== 00:20:02.592 Latency(us) 00:20:02.592 Device Information : IOPS MiB/s Average min max 00:20:02.592 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 7065.00 27.60 141.20 127.74 355.17 00:20:02.592 ======================================================== 00:20:02.592 Total : 7065.00 27.60 141.20 127.74 355.17 00:20:02.592 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2296596 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2296598 00:20:02.592 Initializing NVMe Controllers 00:20:02.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:02.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:02.592 Initialization complete. Launching workers. 00:20:02.592 ======================================================== 00:20:02.592 Latency(us) 00:20:02.592 Device Information : IOPS MiB/s Average min max 00:20:02.592 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 40.00 0.16 25653.75 240.32 41890.22 00:20:02.592 ======================================================== 00:20:02.592 Total : 40.00 0.16 25653.75 240.32 41890.22 00:20:02.592 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:02.592 11:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:02.592 rmmod nvme_tcp 00:20:02.592 rmmod nvme_fabrics 00:20:02.592 rmmod nvme_keyring 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2296573 ']' 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2296573 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2296573 ']' 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2296573 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2296573 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2296573' 00:20:02.592 killing process with pid 2296573 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2296573 00:20:02.592 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2296573 00:20:02.852 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:02.852 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:02.852 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:02.852 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:02.852 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:02.852 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:02.852 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:02.852 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:02.852 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:02.852 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.852 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.852 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.760 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:04.760 00:20:04.760 real 0m10.144s 00:20:04.760 user 0m6.769s 
00:20:04.760 sys 0m5.417s 00:20:04.760 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.760 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:04.760 ************************************ 00:20:04.760 END TEST nvmf_control_msg_list 00:20:04.760 ************************************ 00:20:04.760 11:31:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:04.760 11:31:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:04.760 11:31:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.760 11:31:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:04.760 ************************************ 00:20:04.760 START TEST nvmf_wait_for_buf 00:20:04.760 ************************************ 00:20:04.760 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:05.021 * Looking for test storage... 
00:20:05.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:20:05.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.021 --rc genhtml_branch_coverage=1 00:20:05.021 --rc genhtml_function_coverage=1 00:20:05.021 --rc genhtml_legend=1 00:20:05.021 --rc geninfo_all_blocks=1 00:20:05.021 --rc geninfo_unexecuted_blocks=1 00:20:05.021 00:20:05.021 ' 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:05.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.021 --rc genhtml_branch_coverage=1 00:20:05.021 --rc genhtml_function_coverage=1 00:20:05.021 --rc genhtml_legend=1 00:20:05.021 --rc geninfo_all_blocks=1 00:20:05.021 --rc geninfo_unexecuted_blocks=1 00:20:05.021 00:20:05.021 ' 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:05.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.021 --rc genhtml_branch_coverage=1 00:20:05.021 --rc genhtml_function_coverage=1 00:20:05.021 --rc genhtml_legend=1 00:20:05.021 --rc geninfo_all_blocks=1 00:20:05.021 --rc geninfo_unexecuted_blocks=1 00:20:05.021 00:20:05.021 ' 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:05.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.021 --rc genhtml_branch_coverage=1 00:20:05.021 --rc genhtml_function_coverage=1 00:20:05.021 --rc genhtml_legend=1 00:20:05.021 --rc geninfo_all_blocks=1 00:20:05.021 --rc geninfo_unexecuted_blocks=1 00:20:05.021 00:20:05.021 ' 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.021 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:05.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:05.022 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.593 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:11.594 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:11.594 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:11.594 Found net devices under 0000:86:00.0: cvl_0_0 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:11.594 11:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:11.594 Found net devices under 0000:86:00.1: cvl_0_1 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:11.594 11:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.594 11:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:11.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:11.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms
00:20:11.594
00:20:11.594 --- 10.0.0.2 ping statistics ---
00:20:11.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:11.594 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:11.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:11.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms
00:20:11.594
00:20:11.594 --- 10.0.0.1 ping statistics ---
00:20:11.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:11.594 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2300348
00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf --
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2300348 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2300348 ']' 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.594 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.595 [2024-11-19 11:31:24.740809] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:20:11.595 [2024-11-19 11:31:24.740859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.595 [2024-11-19 11:31:24.822568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.595 [2024-11-19 11:31:24.863375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.595 [2024-11-19 11:31:24.863411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:11.595 [2024-11-19 11:31:24.863418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.595 [2024-11-19 11:31:24.863428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.595 [2024-11-19 11:31:24.863434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.595 [2024-11-19 11:31:24.864002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.595 
11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.595 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.595 Malloc0 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:20:11.595 [2024-11-19 11:31:25.033202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.595 [2024-11-19 11:31:25.061374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
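A note on the `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected` error recorded earlier in this log: POSIX `[` requires both operands of `-eq` to be integers, so a test like `'[' '' -eq 1 ']'` against an empty variable makes the test command itself fail (status 2, with stderr noise) rather than simply evaluate to false. A minimal standalone reproduction, using an illustrative variable name rather than the script's actual one:

```shell
#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" failure mode: an empty
# string is not a valid integer operand for -eq, so `[` errors out (exit 2)
# instead of returning false. The flag name below is illustrative.
flag=""

if [ "$flag" -eq 1 ] 2>/dev/null; then   # errors, so the else branch runs
  echo "flag set"
else
  echo "flag not set"
fi

# Defaulting the expansion keeps the comparison well-formed and silent:
if [ "${flag:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag not set (clean)"
fi
```

Both branches print "flag not set" here, but only the second form does so without tripping the `[` builtin's integer check.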
00:20:11.595 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
[2024-11-19 11:31:25.149025] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:20:12.976 Initializing NVMe Controllers
00:20:12.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:20:12.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:20:12.976 Initialization complete. Launching workers.
00:20:12.976 ========================================================
00:20:12.976                                                  Latency(us)
00:20:12.976 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:20:12.976 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:     129.00      16.12   32238.94    7292.71   63847.96
00:20:12.976 ========================================================
00:20:12.976 Total                                                                    :     129.00      16.12   32238.94    7292.71   63847.96
00:20:12.976
00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:12.976 11:31:26
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:12.976 rmmod nvme_tcp 00:20:12.976 rmmod nvme_fabrics 00:20:12.976 rmmod nvme_keyring 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2300348 ']' 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2300348 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2300348 ']' 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2300348 
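The `kill -0 2300348` trace above is the killprocess helper probing whether the target pid still exists: signal 0 performs only the permission/existence check and delivers nothing, so it is a cheap liveness test both before and after termination. A minimal sketch of that probe/terminate/verify pattern, with a `sleep` child standing in for nvmf_tgt (the real helper also matches the process name via `ps`, which is omitted here):

```shell
#!/usr/bin/env bash
# Probe-then-kill pattern, as traced above with "kill -0 <pid>".
sleep 30 &
pid=$!                       # stand-in for the nvmf_tgt daemon pid

kill -0 "$pid" 2>/dev/null && echo "pid $pid alive"

kill "$pid"                  # default SIGTERM, as killprocess sends
wait "$pid" 2>/dev/null      # reap the child so the pid is fully gone

kill -0 "$pid" 2>/dev/null || echo "pid $pid gone"
```

`wait` matters here: without reaping, a zombie child would still answer `kill -0` even though the process has exited.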
00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2300348 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2300348' 00:20:12.976 killing process with pid 2300348 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2300348 00:20:12.976 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2300348 00:20:13.247 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:13.247 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:13.247 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:13.247 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:13.247 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:13.247 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:13.247 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:13.247 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:13.247 11:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:13.247 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.247 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.247 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.166 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:15.426 00:20:15.426 real 0m10.414s 00:20:15.426 user 0m3.924s 00:20:15.426 sys 0m4.941s 00:20:15.426 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.426 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:15.426 ************************************ 00:20:15.426 END TEST nvmf_wait_for_buf 00:20:15.426 ************************************ 00:20:15.426 11:31:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:15.426 11:31:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:15.426 11:31:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:15.426 11:31:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:15.426 11:31:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:15.426 11:31:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:21.999 
11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.999 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:22.000 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.000 11:31:34 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:22.000 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:22.000 Found net devices under 0000:86:00.0: cvl_0_0 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:22.000 Found net devices under 0000:86:00.1: cvl_0_1 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:22.000 ************************************ 00:20:22.000 START TEST nvmf_perf_adq 00:20:22.000 ************************************ 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:22.000 * Looking for test storage... 00:20:22.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:22.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.000 --rc genhtml_branch_coverage=1 00:20:22.000 --rc genhtml_function_coverage=1 00:20:22.000 --rc genhtml_legend=1 00:20:22.000 --rc geninfo_all_blocks=1 00:20:22.000 --rc geninfo_unexecuted_blocks=1 00:20:22.000 00:20:22.000 ' 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:22.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.000 --rc genhtml_branch_coverage=1 00:20:22.000 --rc genhtml_function_coverage=1 00:20:22.000 --rc genhtml_legend=1 00:20:22.000 --rc geninfo_all_blocks=1 00:20:22.000 --rc geninfo_unexecuted_blocks=1 00:20:22.000 00:20:22.000 ' 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:22.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.000 --rc genhtml_branch_coverage=1 00:20:22.000 --rc genhtml_function_coverage=1 00:20:22.000 --rc genhtml_legend=1 00:20:22.000 --rc geninfo_all_blocks=1 00:20:22.000 --rc geninfo_unexecuted_blocks=1 00:20:22.000 00:20:22.000 ' 00:20:22.000 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:22.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.001 --rc genhtml_branch_coverage=1 00:20:22.001 --rc genhtml_function_coverage=1 00:20:22.001 --rc genhtml_legend=1 00:20:22.001 --rc geninfo_all_blocks=1 00:20:22.001 --rc geninfo_unexecuted_blocks=1 00:20:22.001 00:20:22.001 ' 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.001 11:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:22.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:22.001 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:27.280 11:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:27.280 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:27.280 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:27.280 Found net devices under 0000:86:00.0: cvl_0_0 00:20:27.280 11:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:27.280 Found net devices under 0000:86:00.1: cvl_0_1 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:27.280 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:27.281 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:27.281 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:27.281 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:20:27.281 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:28.219 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:30.128 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:35.406 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:35.407 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:35.407 11:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:35.407 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
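The `[[ 0x159b == \0\x\1\0\1\7 ]]` comparisons above check each discovered device ID against specific Mellanox parts (the backslashes force a literal match rather than glob matching). The same classification can be sketched with a portable `case` statement; the `family` labels below are illustrative only, not names used by the script:

```shell
# Classify a PCI device id the way the checks above do; 0x159b is the
# Intel E810-family id found in this run, so it must not match the
# Mellanox ids 0x1017/0x1019 that the script singles out.
dev_id=0x159b
case "$dev_id" in
  0x1017|0x1019) family=mellanox ;;
  0x159b|0x1592) family=e810 ;;
  *)             family=unknown ;;
esac
echo "$family"
```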
00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:35.407 Found net devices under 0000:86:00.0: cvl_0_0 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:35.407 Found net devices under 0000:86:00.1: cvl_0_1 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:35.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:35.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:20:35.407 00:20:35.407 --- 10.0.0.2 ping statistics --- 00:20:35.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.407 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:35.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:20:35.407 00:20:35.407 --- 10.0.0.1 ping statistics --- 00:20:35.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.407 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:35.407 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:35.408 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:35.408 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:35.408 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.408 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.408 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2308695 00:20:35.408 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2308695 00:20:35.408 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:35.408 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2308695 ']' 00:20:35.408 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.408 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.408 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.408 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.408 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.408 [2024-11-19 11:31:49.051185] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
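`waitforlisten 2308695` above blocks until the freshly launched `nvmf_tgt` is reachable on its UNIX RPC socket, with `max_retries=100` bounding the wait. A self-contained sketch of that polling pattern, substituting a plain temp file for the real `/var/tmp/spdk.sock` so it runs without SPDK:

```shell
# Poll for a path to appear, with a retry cap -- the shape of the
# waitforlisten helper above. A background subshell stands in for the
# target process creating its RPC socket.
sock="$(mktemp -u)"             # hypothetical stand-in for /var/tmp/spdk.sock
( sleep 0.2; : > "$sock" ) &    # "server" creates the socket shortly
max_retries=100
i=0
until [ -e "$sock" ]; do
  i=$(( i + 1 ))
  [ "$i" -ge "$max_retries" ] && break
  sleep 0.05
done
if [ -e "$sock" ]; then status=ready; else status=timeout; fi
echo "$status"
wait
rm -f "$sock"
```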
00:20:35.408 [2024-11-19 11:31:49.051234] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.408 [2024-11-19 11:31:49.133523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:35.408 [2024-11-19 11:31:49.177433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.408 [2024-11-19 11:31:49.177470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.408 [2024-11-19 11:31:49.177477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.408 [2024-11-19 11:31:49.177483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.408 [2024-11-19 11:31:49.177488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
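`nvmfappstart -m 0xF` pins the target to a four-core mask, which is why exactly four reactor threads (cores 0-3) start in the log. The correspondence between mask and reactor count is just the popcount of the mask, checkable with plain shell arithmetic:

```shell
# Count the set bits in the -m core mask; 0xF has four bits set,
# matching the four "Reactor started on core N" lines in the log.
mask=$(( 0xF ))
cores=0
while [ "$mask" -gt 0 ]; do
  cores=$(( cores + mask % 2 ))
  mask=$(( mask / 2 ))
done
echo "$cores"
```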
00:20:35.408 [2024-11-19 11:31:49.179083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.408 [2024-11-19 11:31:49.179190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.408 [2024-11-19 11:31:49.179296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.408 [2024-11-19 11:31:49.179297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:35.666 11:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.666 [2024-11-19 11:31:49.379704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.666 Malloc1 00:20:35.666 11:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.666 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.925 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.925 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.925 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.925 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.925 [2024-11-19 11:31:49.451009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.925 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.925 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2308724 00:20:35.925 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:35.925 11:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:37.834 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:37.834 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.834 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.834 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.834 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:37.834 "tick_rate": 2300000000, 00:20:37.834 "poll_groups": [ 00:20:37.834 { 00:20:37.834 "name": "nvmf_tgt_poll_group_000", 00:20:37.834 "admin_qpairs": 1, 00:20:37.834 "io_qpairs": 1, 00:20:37.834 "current_admin_qpairs": 1, 00:20:37.834 "current_io_qpairs": 1, 00:20:37.834 "pending_bdev_io": 0, 00:20:37.834 "completed_nvme_io": 19237, 00:20:37.834 "transports": [ 00:20:37.834 { 00:20:37.834 "trtype": "TCP" 00:20:37.834 } 00:20:37.834 ] 00:20:37.834 }, 00:20:37.834 { 00:20:37.834 "name": "nvmf_tgt_poll_group_001", 00:20:37.834 "admin_qpairs": 0, 00:20:37.834 "io_qpairs": 1, 00:20:37.834 "current_admin_qpairs": 0, 00:20:37.834 "current_io_qpairs": 1, 00:20:37.834 "pending_bdev_io": 0, 00:20:37.834 "completed_nvme_io": 19408, 00:20:37.834 "transports": [ 00:20:37.834 { 00:20:37.834 "trtype": "TCP" 00:20:37.834 } 00:20:37.834 ] 00:20:37.834 }, 00:20:37.834 { 00:20:37.834 "name": "nvmf_tgt_poll_group_002", 00:20:37.834 "admin_qpairs": 0, 00:20:37.834 "io_qpairs": 1, 00:20:37.834 "current_admin_qpairs": 0, 00:20:37.834 "current_io_qpairs": 1, 00:20:37.834 "pending_bdev_io": 0, 00:20:37.834 "completed_nvme_io": 19198, 00:20:37.834 
"transports": [ 00:20:37.834 { 00:20:37.834 "trtype": "TCP" 00:20:37.834 } 00:20:37.834 ] 00:20:37.834 }, 00:20:37.834 { 00:20:37.834 "name": "nvmf_tgt_poll_group_003", 00:20:37.834 "admin_qpairs": 0, 00:20:37.834 "io_qpairs": 1, 00:20:37.834 "current_admin_qpairs": 0, 00:20:37.834 "current_io_qpairs": 1, 00:20:37.834 "pending_bdev_io": 0, 00:20:37.834 "completed_nvme_io": 19150, 00:20:37.834 "transports": [ 00:20:37.834 { 00:20:37.834 "trtype": "TCP" 00:20:37.834 } 00:20:37.834 ] 00:20:37.834 } 00:20:37.834 ] 00:20:37.834 }' 00:20:37.834 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:37.834 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:37.834 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:37.834 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:37.834 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2308724 00:20:45.964 Initializing NVMe Controllers 00:20:45.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:45.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:45.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:45.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:45.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:45.964 Initialization complete. Launching workers. 
00:20:45.964 ======================================================== 00:20:45.964 Latency(us) 00:20:45.964 Device Information : IOPS MiB/s Average min max 00:20:45.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10108.90 39.49 6331.75 1689.40 10437.29 00:20:45.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10331.70 40.36 6193.86 1946.55 10403.61 00:20:45.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10199.80 39.84 6273.96 2207.64 12330.36 00:20:45.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10248.40 40.03 6244.17 2392.37 11011.80 00:20:45.964 ======================================================== 00:20:45.964 Total : 40888.80 159.72 6260.54 1689.40 12330.36 00:20:45.964 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:45.964 rmmod nvme_tcp 00:20:45.964 rmmod nvme_fabrics 00:20:45.964 rmmod nvme_keyring 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:45.964 11:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2308695 ']' 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2308695 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2308695 ']' 00:20:45.964 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2308695 00:20:45.965 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:45.965 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.965 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2308695 00:20:45.965 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:45.965 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:45.965 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2308695' 00:20:45.965 killing process with pid 2308695 00:20:45.965 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2308695 00:20:45.965 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2308695 00:20:46.225 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:46.225 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:46.225 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:46.225 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:46.225 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:46.225 
11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:46.225 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:46.225 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:46.225 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:46.225 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.225 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.225 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.765 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:48.765 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:48.765 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:48.765 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:49.700 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:51.604 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:56.884 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:56.885 11:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:56.885 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:56.885 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:56.885 Found net devices under 0000:86:00.0: cvl_0_0 00:20:56.885 11:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:56.885 Found net devices under 0000:86:00.1: cvl_0_1 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.885 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:56.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:20:56.886 00:20:56.886 --- 10.0.0.2 ping statistics --- 00:20:56.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.886 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:56.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:20:56.886 00:20:56.886 --- 10.0.0.1 ping statistics --- 00:20:56.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.886 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:56.886 net.core.busy_poll = 1 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:56.886 net.core.busy_read = 1 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2312528 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2312528 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2312528 ']' 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.886 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.146 [2024-11-19 11:32:10.676618] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:20:57.146 [2024-11-19 11:32:10.676661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.146 [2024-11-19 11:32:10.758992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:57.146 [2024-11-19 11:32:10.802195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.146 [2024-11-19 11:32:10.802236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.146 [2024-11-19 11:32:10.802243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.146 [2024-11-19 11:32:10.802249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:57.146 [2024-11-19 11:32:10.802254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.146 [2024-11-19 11:32:10.803818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.146 [2024-11-19 11:32:10.803845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.146 [2024-11-19 11:32:10.803998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.146 [2024-11-19 11:32:10.803999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.085 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.086 [2024-11-19 11:32:11.695598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.086 11:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.086 Malloc1 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.086 [2024-11-19 11:32:11.761793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2312776 
00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:58.086 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:00.625 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:00.625 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.625 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.625 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.625 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:00.625 "tick_rate": 2300000000, 00:21:00.625 "poll_groups": [ 00:21:00.625 { 00:21:00.625 "name": "nvmf_tgt_poll_group_000", 00:21:00.625 "admin_qpairs": 1, 00:21:00.625 "io_qpairs": 2, 00:21:00.625 "current_admin_qpairs": 1, 00:21:00.625 "current_io_qpairs": 2, 00:21:00.625 "pending_bdev_io": 0, 00:21:00.625 "completed_nvme_io": 28115, 00:21:00.625 "transports": [ 00:21:00.625 { 00:21:00.625 "trtype": "TCP" 00:21:00.625 } 00:21:00.625 ] 00:21:00.625 }, 00:21:00.625 { 00:21:00.625 "name": "nvmf_tgt_poll_group_001", 00:21:00.625 "admin_qpairs": 0, 00:21:00.625 "io_qpairs": 2, 00:21:00.625 "current_admin_qpairs": 0, 00:21:00.625 "current_io_qpairs": 2, 00:21:00.625 "pending_bdev_io": 0, 00:21:00.625 "completed_nvme_io": 28669, 00:21:00.625 "transports": [ 00:21:00.625 { 00:21:00.625 "trtype": "TCP" 00:21:00.625 } 00:21:00.625 ] 00:21:00.625 }, 00:21:00.625 { 00:21:00.625 "name": "nvmf_tgt_poll_group_002", 00:21:00.625 "admin_qpairs": 0, 00:21:00.625 "io_qpairs": 0, 00:21:00.625 "current_admin_qpairs": 0, 
00:21:00.625 "current_io_qpairs": 0, 00:21:00.625 "pending_bdev_io": 0, 00:21:00.625 "completed_nvme_io": 0, 00:21:00.625 "transports": [ 00:21:00.625 { 00:21:00.625 "trtype": "TCP" 00:21:00.625 } 00:21:00.625 ] 00:21:00.625 }, 00:21:00.625 { 00:21:00.625 "name": "nvmf_tgt_poll_group_003", 00:21:00.625 "admin_qpairs": 0, 00:21:00.625 "io_qpairs": 0, 00:21:00.625 "current_admin_qpairs": 0, 00:21:00.625 "current_io_qpairs": 0, 00:21:00.625 "pending_bdev_io": 0, 00:21:00.625 "completed_nvme_io": 0, 00:21:00.625 "transports": [ 00:21:00.625 { 00:21:00.625 "trtype": "TCP" 00:21:00.625 } 00:21:00.625 ] 00:21:00.625 } 00:21:00.625 ] 00:21:00.625 }' 00:21:00.625 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:00.625 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:00.625 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:00.625 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:00.625 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2312776 00:21:08.752 Initializing NVMe Controllers 00:21:08.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:08.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:08.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:08.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:08.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:08.752 Initialization complete. Launching workers. 
00:21:08.752 ======================================================== 00:21:08.752 Latency(us) 00:21:08.752 Device Information : IOPS MiB/s Average min max 00:21:08.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7679.09 30.00 8333.31 1360.46 53089.22 00:21:08.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7519.49 29.37 8511.91 1360.17 53275.85 00:21:08.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7576.69 29.60 8447.86 1505.74 53200.32 00:21:08.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7075.69 27.64 9078.39 1608.23 55387.05 00:21:08.752 ======================================================== 00:21:08.752 Total : 29850.96 116.61 8583.98 1360.17 55387.05 00:21:08.752 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.752 rmmod nvme_tcp 00:21:08.752 rmmod nvme_fabrics 00:21:08.752 rmmod nvme_keyring 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:08.752 11:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2312528 ']' 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2312528 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2312528 ']' 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2312528 00:21:08.752 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2312528 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2312528' 00:21:08.752 killing process with pid 2312528 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2312528 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2312528 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:08.752 
11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.752 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:08.753 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.753 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.753 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:12.051 00:21:12.051 real 0m50.640s 00:21:12.051 user 2m46.479s 00:21:12.051 sys 0m10.530s 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:12.051 ************************************ 00:21:12.051 END TEST nvmf_perf_adq 00:21:12.051 ************************************ 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:12.051 ************************************ 00:21:12.051 START TEST nvmf_shutdown 00:21:12.051 ************************************ 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:12.051 * Looking for test storage... 00:21:12.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:12.051 11:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:12.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.051 --rc genhtml_branch_coverage=1 00:21:12.051 --rc genhtml_function_coverage=1 00:21:12.051 --rc genhtml_legend=1 00:21:12.051 --rc geninfo_all_blocks=1 00:21:12.051 --rc geninfo_unexecuted_blocks=1 00:21:12.051 00:21:12.051 ' 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:12.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.051 --rc genhtml_branch_coverage=1 00:21:12.051 --rc genhtml_function_coverage=1 00:21:12.051 --rc genhtml_legend=1 00:21:12.051 --rc geninfo_all_blocks=1 00:21:12.051 --rc geninfo_unexecuted_blocks=1 00:21:12.051 00:21:12.051 ' 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:12.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.051 --rc genhtml_branch_coverage=1 00:21:12.051 --rc genhtml_function_coverage=1 00:21:12.051 --rc genhtml_legend=1 00:21:12.051 --rc geninfo_all_blocks=1 00:21:12.051 --rc geninfo_unexecuted_blocks=1 00:21:12.051 00:21:12.051 ' 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:12.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.051 --rc genhtml_branch_coverage=1 00:21:12.051 --rc genhtml_function_coverage=1 00:21:12.051 --rc genhtml_legend=1 00:21:12.051 --rc geninfo_all_blocks=1 00:21:12.051 --rc geninfo_unexecuted_blocks=1 00:21:12.051 00:21:12.051 ' 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.051 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:12.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:12.052 ************************************ 00:21:12.052 START TEST nvmf_shutdown_tc1 00:21:12.052 ************************************ 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:12.052 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:18.629 11:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.629 11:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.629 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:18.630 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.630 11:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:18.630 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:18.630 Found net devices under 0000:86:00.0: cvl_0_0 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:18.630 Found net devices under 0000:86:00.1: cvl_0_1 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:18.630 11:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:18.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:21:18.630 00:21:18.630 --- 10.0.0.2 ping statistics --- 00:21:18.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.630 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:21:18.630 00:21:18.630 --- 10.0.0.1 ping statistics --- 00:21:18.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.630 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2318223 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2318223 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:18.630 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2318223 ']' 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:18.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:18.631 [2024-11-19 11:32:31.692862] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:21:18.631 [2024-11-19 11:32:31.692906] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.631 [2024-11-19 11:32:31.773077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.631 [2024-11-19 11:32:31.815150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.631 [2024-11-19 11:32:31.815187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.631 [2024-11-19 11:32:31.815194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.631 [2024-11-19 11:32:31.815200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.631 [2024-11-19 11:32:31.815205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:18.631 [2024-11-19 11:32:31.816863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.631 [2024-11-19 11:32:31.817004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.631 [2024-11-19 11:32:31.817111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.631 [2024-11-19 11:32:31.817112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:18.631 [2024-11-19 11:32:31.952887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.631 11:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.631 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:18.631 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.631 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:18.631 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.631 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:18.631 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:18.631 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.631 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:18.631 Malloc1 00:21:18.631 [2024-11-19 11:32:32.067998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.631 Malloc2 00:21:18.631 Malloc3 00:21:18.631 Malloc4 00:21:18.631 Malloc5 00:21:18.631 Malloc6 00:21:18.631 Malloc7 00:21:18.631 Malloc8 00:21:18.631 Malloc9 
00:21:18.892 Malloc10 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2318283 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2318283 /var/tmp/bdevperf.sock 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2318283 ']' 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.892 { 00:21:18.892 "params": { 00:21:18.892 "name": "Nvme$subsystem", 00:21:18.892 "trtype": "$TEST_TRANSPORT", 00:21:18.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.892 "adrfam": "ipv4", 00:21:18.892 "trsvcid": "$NVMF_PORT", 00:21:18.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.892 "hdgst": ${hdgst:-false}, 00:21:18.892 "ddgst": ${ddgst:-false} 00:21:18.892 }, 00:21:18.892 "method": "bdev_nvme_attach_controller" 00:21:18.892 } 00:21:18.892 EOF 00:21:18.892 )") 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.892 { 00:21:18.892 "params": { 00:21:18.892 "name": "Nvme$subsystem", 00:21:18.892 "trtype": "$TEST_TRANSPORT", 00:21:18.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.892 "adrfam": "ipv4", 00:21:18.892 "trsvcid": "$NVMF_PORT", 00:21:18.892 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.892 "hdgst": ${hdgst:-false}, 00:21:18.892 "ddgst": ${ddgst:-false} 00:21:18.892 }, 00:21:18.892 "method": "bdev_nvme_attach_controller" 00:21:18.892 } 00:21:18.892 EOF 00:21:18.892 )") 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.892 { 00:21:18.892 "params": { 00:21:18.892 "name": "Nvme$subsystem", 00:21:18.892 "trtype": "$TEST_TRANSPORT", 00:21:18.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.892 "adrfam": "ipv4", 00:21:18.892 "trsvcid": "$NVMF_PORT", 00:21:18.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.892 "hdgst": ${hdgst:-false}, 00:21:18.892 "ddgst": ${ddgst:-false} 00:21:18.892 }, 00:21:18.892 "method": "bdev_nvme_attach_controller" 00:21:18.892 } 00:21:18.892 EOF 00:21:18.892 )") 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.892 { 00:21:18.892 "params": { 00:21:18.892 "name": "Nvme$subsystem", 00:21:18.892 "trtype": "$TEST_TRANSPORT", 00:21:18.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.892 "adrfam": "ipv4", 00:21:18.892 "trsvcid": "$NVMF_PORT", 00:21:18.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.892 "hdgst": 
${hdgst:-false}, 00:21:18.892 "ddgst": ${ddgst:-false} 00:21:18.892 }, 00:21:18.892 "method": "bdev_nvme_attach_controller" 00:21:18.892 } 00:21:18.892 EOF 00:21:18.892 )") 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.892 { 00:21:18.892 "params": { 00:21:18.892 "name": "Nvme$subsystem", 00:21:18.892 "trtype": "$TEST_TRANSPORT", 00:21:18.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.892 "adrfam": "ipv4", 00:21:18.892 "trsvcid": "$NVMF_PORT", 00:21:18.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.892 "hdgst": ${hdgst:-false}, 00:21:18.892 "ddgst": ${ddgst:-false} 00:21:18.892 }, 00:21:18.892 "method": "bdev_nvme_attach_controller" 00:21:18.892 } 00:21:18.892 EOF 00:21:18.892 )") 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.892 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.892 { 00:21:18.892 "params": { 00:21:18.892 "name": "Nvme$subsystem", 00:21:18.892 "trtype": "$TEST_TRANSPORT", 00:21:18.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "$NVMF_PORT", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.893 "hdgst": ${hdgst:-false}, 00:21:18.893 "ddgst": ${ddgst:-false} 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 
00:21:18.893 } 00:21:18.893 EOF 00:21:18.893 )") 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.893 { 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme$subsystem", 00:21:18.893 "trtype": "$TEST_TRANSPORT", 00:21:18.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "$NVMF_PORT", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.893 "hdgst": ${hdgst:-false}, 00:21:18.893 "ddgst": ${ddgst:-false} 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 } 00:21:18.893 EOF 00:21:18.893 )") 00:21:18.893 [2024-11-19 11:32:32.538744] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:21:18.893 [2024-11-19 11:32:32.538793] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.893 { 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme$subsystem", 00:21:18.893 "trtype": "$TEST_TRANSPORT", 00:21:18.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "$NVMF_PORT", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.893 "hdgst": ${hdgst:-false}, 00:21:18.893 "ddgst": ${ddgst:-false} 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 } 00:21:18.893 EOF 00:21:18.893 )") 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.893 { 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme$subsystem", 00:21:18.893 "trtype": "$TEST_TRANSPORT", 00:21:18.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "$NVMF_PORT", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.893 "hdgst": ${hdgst:-false}, 
00:21:18.893 "ddgst": ${ddgst:-false} 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 } 00:21:18.893 EOF 00:21:18.893 )") 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:18.893 { 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme$subsystem", 00:21:18.893 "trtype": "$TEST_TRANSPORT", 00:21:18.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "$NVMF_PORT", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.893 "hdgst": ${hdgst:-false}, 00:21:18.893 "ddgst": ${ddgst:-false} 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 } 00:21:18.893 EOF 00:21:18.893 )") 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:18.893 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme1", 00:21:18.893 "trtype": "tcp", 00:21:18.893 "traddr": "10.0.0.2", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "4420", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.893 "hdgst": false, 00:21:18.893 "ddgst": false 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 },{ 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme2", 00:21:18.893 "trtype": "tcp", 00:21:18.893 "traddr": "10.0.0.2", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "4420", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:18.893 "hdgst": false, 00:21:18.893 "ddgst": false 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 },{ 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme3", 00:21:18.893 "trtype": "tcp", 00:21:18.893 "traddr": "10.0.0.2", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "4420", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:18.893 "hdgst": false, 00:21:18.893 "ddgst": false 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 },{ 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme4", 00:21:18.893 "trtype": "tcp", 00:21:18.893 "traddr": "10.0.0.2", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "4420", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:18.893 "hdgst": false, 00:21:18.893 "ddgst": false 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 },{ 00:21:18.893 "params": { 
00:21:18.893 "name": "Nvme5", 00:21:18.893 "trtype": "tcp", 00:21:18.893 "traddr": "10.0.0.2", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "4420", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:18.893 "hdgst": false, 00:21:18.893 "ddgst": false 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 },{ 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme6", 00:21:18.893 "trtype": "tcp", 00:21:18.893 "traddr": "10.0.0.2", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "4420", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:18.893 "hdgst": false, 00:21:18.893 "ddgst": false 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 },{ 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme7", 00:21:18.893 "trtype": "tcp", 00:21:18.893 "traddr": "10.0.0.2", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "4420", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:18.893 "hdgst": false, 00:21:18.893 "ddgst": false 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 },{ 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme8", 00:21:18.893 "trtype": "tcp", 00:21:18.893 "traddr": "10.0.0.2", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "4420", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:18.893 "hdgst": false, 00:21:18.893 "ddgst": false 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 },{ 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme9", 00:21:18.893 "trtype": "tcp", 00:21:18.893 "traddr": "10.0.0.2", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "4420", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:18.893 "hdgst": false, 00:21:18.893 "ddgst": false 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 },{ 00:21:18.893 "params": { 00:21:18.893 "name": "Nvme10", 00:21:18.893 "trtype": "tcp", 00:21:18.893 "traddr": "10.0.0.2", 00:21:18.893 "adrfam": "ipv4", 00:21:18.893 "trsvcid": "4420", 00:21:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:18.893 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:18.893 "hdgst": false, 00:21:18.893 "ddgst": false 00:21:18.893 }, 00:21:18.893 "method": "bdev_nvme_attach_controller" 00:21:18.893 }' 00:21:18.894 [2024-11-19 11:32:32.616541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.894 [2024-11-19 11:32:32.658352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.802 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.802 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:20.802 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:20.802 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.802 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:20.802 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.802 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2318283 00:21:20.802 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:20.802 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:21.742 
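The trace above repeatedly expands `gen_nvmf_target_json`, which builds one JSON `bdev_nvme_attach_controller` entry per subsystem via a heredoc inside a command substitution, joins the entries with `IFS=,`, and validates the result with `jq`. A minimal, standalone sketch of that pattern follows; the variable values are stand-ins taken from this log, and the function body is modeled on (not copied from) SPDK's `nvmf/common.sh`:

```shell
#!/usr/bin/env bash
# Stand-in values; in the real suite these come from nvmf/common.sh setup.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem
    local -a config=()
    # "${@:-1}" defaults to a single subsystem "1" when called with no args.
    for subsystem in "${@:-1}"; do
        # Each heredoc renders one attach-controller stanza; hdgst/ddgst
        # default to false unless exported in the environment.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the stanzas and let jq validate/pretty-print the JSON,
    # mirroring the "IFS=," / "jq ." steps visible in the trace.
    local IFS=,
    printf '[%s]' "${config[*]}" | jq .
}

gen_nvmf_target_json 1 2
```

This is why the log shows ten near-identical `config+=("$(cat <<-EOF` blocks: the loop body is traced once per subsystem argument (1 through 10) before the single `jq .` at the end.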
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2318283 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2318223 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.742 { 00:21:21.742 "params": { 00:21:21.742 "name": "Nvme$subsystem", 00:21:21.742 "trtype": "$TEST_TRANSPORT", 00:21:21.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.742 "adrfam": "ipv4", 00:21:21.742 "trsvcid": "$NVMF_PORT", 00:21:21.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.742 "hdgst": ${hdgst:-false}, 00:21:21.742 "ddgst": ${ddgst:-false} 00:21:21.742 }, 00:21:21.742 "method": "bdev_nvme_attach_controller" 00:21:21.742 } 00:21:21.742 EOF 00:21:21.742 )") 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:21.742 11:32:35 
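At this point the test SIGKILLs the bdev_svc helper (pid 2318283, hence the shell's `Killed` message) and then runs `kill -0 2318223` to confirm the nvmf target survived. `kill -0` delivers no signal; it only checks that the pid exists and is signalable. A small self-contained illustration of that liveness-probe idiom, using a `sleep` as a stand-in for the long-running target:

```shell
#!/usr/bin/env bash
# kill -0 is a liveness probe: no signal is sent, the call only tests
# whether the pid exists and we are allowed to signal it.
sleep 300 &            # stand-in for a long-running target process
pid=$!

if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is alive"
fi

kill -9 "$pid"         # SIGKILL cannot be caught or ignored
wait "$pid" 2>/dev/null  # reap the child so the pid is fully gone

if ! kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is gone"
fi
```

In the test, a failing `kill -0` on the target pid would mean the forced shutdown of the helper also took down the target, which is exactly what `shutdown_tc1` is checking for.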
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.742 { 00:21:21.742 "params": { 00:21:21.742 "name": "Nvme$subsystem", 00:21:21.742 "trtype": "$TEST_TRANSPORT", 00:21:21.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.742 "adrfam": "ipv4", 00:21:21.742 "trsvcid": "$NVMF_PORT", 00:21:21.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.742 "hdgst": ${hdgst:-false}, 00:21:21.742 "ddgst": ${ddgst:-false} 00:21:21.742 }, 00:21:21.742 "method": "bdev_nvme_attach_controller" 00:21:21.742 } 00:21:21.742 EOF 00:21:21.742 )") 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.742 { 00:21:21.742 "params": { 00:21:21.742 "name": "Nvme$subsystem", 00:21:21.742 "trtype": "$TEST_TRANSPORT", 00:21:21.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.742 "adrfam": "ipv4", 00:21:21.742 "trsvcid": "$NVMF_PORT", 00:21:21.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.742 "hdgst": ${hdgst:-false}, 00:21:21.742 "ddgst": ${ddgst:-false} 00:21:21.742 }, 00:21:21.742 "method": "bdev_nvme_attach_controller" 00:21:21.742 } 00:21:21.742 EOF 00:21:21.742 )") 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.742 
11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.742 { 00:21:21.742 "params": { 00:21:21.742 "name": "Nvme$subsystem", 00:21:21.742 "trtype": "$TEST_TRANSPORT", 00:21:21.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.742 "adrfam": "ipv4", 00:21:21.742 "trsvcid": "$NVMF_PORT", 00:21:21.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.742 "hdgst": ${hdgst:-false}, 00:21:21.742 "ddgst": ${ddgst:-false} 00:21:21.742 }, 00:21:21.742 "method": "bdev_nvme_attach_controller" 00:21:21.742 } 00:21:21.742 EOF 00:21:21.742 )") 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.742 { 00:21:21.742 "params": { 00:21:21.742 "name": "Nvme$subsystem", 00:21:21.742 "trtype": "$TEST_TRANSPORT", 00:21:21.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.742 "adrfam": "ipv4", 00:21:21.742 "trsvcid": "$NVMF_PORT", 00:21:21.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.742 "hdgst": ${hdgst:-false}, 00:21:21.742 "ddgst": ${ddgst:-false} 00:21:21.742 }, 00:21:21.742 "method": "bdev_nvme_attach_controller" 00:21:21.742 } 00:21:21.742 EOF 00:21:21.742 )") 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:21:21.742 { 00:21:21.742 "params": { 00:21:21.742 "name": "Nvme$subsystem", 00:21:21.742 "trtype": "$TEST_TRANSPORT", 00:21:21.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.742 "adrfam": "ipv4", 00:21:21.742 "trsvcid": "$NVMF_PORT", 00:21:21.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.742 "hdgst": ${hdgst:-false}, 00:21:21.742 "ddgst": ${ddgst:-false} 00:21:21.742 }, 00:21:21.742 "method": "bdev_nvme_attach_controller" 00:21:21.742 } 00:21:21.742 EOF 00:21:21.742 )") 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.742 { 00:21:21.742 "params": { 00:21:21.742 "name": "Nvme$subsystem", 00:21:21.742 "trtype": "$TEST_TRANSPORT", 00:21:21.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.742 "adrfam": "ipv4", 00:21:21.742 "trsvcid": "$NVMF_PORT", 00:21:21.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.742 "hdgst": ${hdgst:-false}, 00:21:21.742 "ddgst": ${ddgst:-false} 00:21:21.742 }, 00:21:21.742 "method": "bdev_nvme_attach_controller" 00:21:21.742 } 00:21:21.742 EOF 00:21:21.742 )") 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:21.742 [2024-11-19 11:32:35.478656] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:21:21.742 [2024-11-19 11:32:35.478704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318808 ] 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.742 { 00:21:21.742 "params": { 00:21:21.742 "name": "Nvme$subsystem", 00:21:21.742 "trtype": "$TEST_TRANSPORT", 00:21:21.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.742 "adrfam": "ipv4", 00:21:21.742 "trsvcid": "$NVMF_PORT", 00:21:21.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.742 "hdgst": ${hdgst:-false}, 00:21:21.742 "ddgst": ${ddgst:-false} 00:21:21.742 }, 00:21:21.742 "method": "bdev_nvme_attach_controller" 00:21:21.742 } 00:21:21.742 EOF 00:21:21.742 )") 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.742 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.742 { 00:21:21.742 "params": { 00:21:21.742 "name": "Nvme$subsystem", 00:21:21.742 "trtype": "$TEST_TRANSPORT", 00:21:21.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.743 "adrfam": "ipv4", 00:21:21.743 "trsvcid": "$NVMF_PORT", 00:21:21.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.743 "hdgst": ${hdgst:-false}, 00:21:21.743 "ddgst": ${ddgst:-false} 00:21:21.743 }, 00:21:21.743 "method": 
"bdev_nvme_attach_controller" 00:21:21.743 } 00:21:21.743 EOF 00:21:21.743 )") 00:21:21.743 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:21.743 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.743 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.743 { 00:21:21.743 "params": { 00:21:21.743 "name": "Nvme$subsystem", 00:21:21.743 "trtype": "$TEST_TRANSPORT", 00:21:21.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.743 "adrfam": "ipv4", 00:21:21.743 "trsvcid": "$NVMF_PORT", 00:21:21.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.743 "hdgst": ${hdgst:-false}, 00:21:21.743 "ddgst": ${ddgst:-false} 00:21:21.743 }, 00:21:21.743 "method": "bdev_nvme_attach_controller" 00:21:21.743 } 00:21:21.743 EOF 00:21:21.743 )") 00:21:21.743 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:21.743 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:21.743 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:21.743 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:21.743 "params": { 00:21:21.743 "name": "Nvme1", 00:21:21.743 "trtype": "tcp", 00:21:21.743 "traddr": "10.0.0.2", 00:21:21.743 "adrfam": "ipv4", 00:21:21.743 "trsvcid": "4420", 00:21:21.743 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.743 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.743 "hdgst": false, 00:21:21.743 "ddgst": false 00:21:21.743 }, 00:21:21.743 "method": "bdev_nvme_attach_controller" 00:21:21.743 },{ 00:21:21.743 "params": { 00:21:21.743 "name": "Nvme2", 00:21:21.743 "trtype": "tcp", 00:21:21.743 "traddr": "10.0.0.2", 00:21:21.743 "adrfam": "ipv4", 00:21:21.743 "trsvcid": "4420", 00:21:21.743 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:21.743 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:21.743 "hdgst": false, 00:21:21.743 "ddgst": false 00:21:21.743 }, 00:21:21.743 "method": "bdev_nvme_attach_controller" 00:21:21.743 },{ 00:21:21.743 "params": { 00:21:21.743 "name": "Nvme3", 00:21:21.743 "trtype": "tcp", 00:21:21.743 "traddr": "10.0.0.2", 00:21:21.743 "adrfam": "ipv4", 00:21:21.743 "trsvcid": "4420", 00:21:21.743 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:21.743 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:21.743 "hdgst": false, 00:21:21.743 "ddgst": false 00:21:21.743 }, 00:21:21.743 "method": "bdev_nvme_attach_controller" 00:21:21.743 },{ 00:21:21.743 "params": { 00:21:21.743 "name": "Nvme4", 00:21:21.743 "trtype": "tcp", 00:21:21.743 "traddr": "10.0.0.2", 00:21:21.743 "adrfam": "ipv4", 00:21:21.743 "trsvcid": "4420", 00:21:21.743 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:21.743 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:21.743 "hdgst": false, 00:21:21.743 "ddgst": false 00:21:21.743 }, 00:21:21.743 "method": "bdev_nvme_attach_controller" 00:21:21.743 },{ 00:21:21.743 "params": { 
00:21:21.743 "name": "Nvme5", 00:21:21.743 "trtype": "tcp", 00:21:21.743 "traddr": "10.0.0.2", 00:21:21.743 "adrfam": "ipv4", 00:21:21.743 "trsvcid": "4420", 00:21:21.743 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:21.743 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:21.743 "hdgst": false, 00:21:21.743 "ddgst": false 00:21:21.743 }, 00:21:21.743 "method": "bdev_nvme_attach_controller" 00:21:21.743 },{ 00:21:21.743 "params": { 00:21:21.743 "name": "Nvme6", 00:21:21.743 "trtype": "tcp", 00:21:21.743 "traddr": "10.0.0.2", 00:21:21.743 "adrfam": "ipv4", 00:21:21.743 "trsvcid": "4420", 00:21:21.743 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:21.743 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:21.743 "hdgst": false, 00:21:21.743 "ddgst": false 00:21:21.743 }, 00:21:21.743 "method": "bdev_nvme_attach_controller" 00:21:21.743 },{ 00:21:21.743 "params": { 00:21:21.743 "name": "Nvme7", 00:21:21.743 "trtype": "tcp", 00:21:21.743 "traddr": "10.0.0.2", 00:21:21.743 "adrfam": "ipv4", 00:21:21.743 "trsvcid": "4420", 00:21:21.743 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:21.743 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:21.743 "hdgst": false, 00:21:21.743 "ddgst": false 00:21:21.743 }, 00:21:21.743 "method": "bdev_nvme_attach_controller" 00:21:21.743 },{ 00:21:21.743 "params": { 00:21:21.743 "name": "Nvme8", 00:21:21.743 "trtype": "tcp", 00:21:21.743 "traddr": "10.0.0.2", 00:21:21.743 "adrfam": "ipv4", 00:21:21.743 "trsvcid": "4420", 00:21:21.743 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:21.743 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:21.743 "hdgst": false, 00:21:21.743 "ddgst": false 00:21:21.743 }, 00:21:21.743 "method": "bdev_nvme_attach_controller" 00:21:21.743 },{ 00:21:21.743 "params": { 00:21:21.743 "name": "Nvme9", 00:21:21.743 "trtype": "tcp", 00:21:21.743 "traddr": "10.0.0.2", 00:21:21.743 "adrfam": "ipv4", 00:21:21.743 "trsvcid": "4420", 00:21:21.743 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:21.743 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:21.743 "hdgst": false, 00:21:21.743 "ddgst": false 00:21:21.743 }, 00:21:21.743 "method": "bdev_nvme_attach_controller" 00:21:21.743 },{ 00:21:21.743 "params": { 00:21:21.743 "name": "Nvme10", 00:21:21.743 "trtype": "tcp", 00:21:21.743 "traddr": "10.0.0.2", 00:21:21.743 "adrfam": "ipv4", 00:21:21.743 "trsvcid": "4420", 00:21:21.743 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:21.743 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:21.743 "hdgst": false, 00:21:21.743 "ddgst": false 00:21:21.743 }, 00:21:21.743 "method": "bdev_nvme_attach_controller" 00:21:21.743 }' 00:21:22.003 [2024-11-19 11:32:35.556172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.003 [2024-11-19 11:32:35.597505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.383 Running I/O for 1 seconds... 00:21:24.664 2204.00 IOPS, 137.75 MiB/s 00:21:24.664 Latency(us) 00:21:24.664 [2024-11-19T10:32:38.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.664 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.664 Verification LBA range: start 0x0 length 0x400 00:21:24.664 Nvme1n1 : 1.13 291.25 18.20 0.00 0.00 213286.73 9232.03 219745.06 00:21:24.664 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.664 Verification LBA range: start 0x0 length 0x400 00:21:24.664 Nvme2n1 : 1.10 236.58 14.79 0.00 0.00 258867.64 17324.30 232510.33 00:21:24.664 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.664 Verification LBA range: start 0x0 length 0x400 00:21:24.664 Nvme3n1 : 1.14 279.80 17.49 0.00 0.00 220035.87 16298.52 215186.03 00:21:24.664 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.664 Verification LBA range: start 0x0 length 0x400 00:21:24.664 Nvme4n1 : 1.13 287.50 17.97 0.00 0.00 207111.47 15044.79 219745.06 00:21:24.664 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:24.664 Verification LBA range: start 0x0 length 0x400 00:21:24.664 Nvme5n1 : 1.15 277.74 17.36 0.00 0.00 215345.78 15842.62 235245.75 00:21:24.664 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.664 Verification LBA range: start 0x0 length 0x400 00:21:24.664 Nvme6n1 : 1.10 233.32 14.58 0.00 0.00 251576.77 17894.18 246187.41 00:21:24.664 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.664 Verification LBA range: start 0x0 length 0x400 00:21:24.665 Nvme7n1 : 1.14 281.00 17.56 0.00 0.00 206321.04 14019.01 219745.06 00:21:24.665 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.665 Verification LBA range: start 0x0 length 0x400 00:21:24.665 Nvme8n1 : 1.15 282.16 17.63 0.00 0.00 202265.43 1460.31 230686.72 00:21:24.665 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.665 Verification LBA range: start 0x0 length 0x400 00:21:24.665 Nvme9n1 : 1.20 271.53 16.97 0.00 0.00 200897.96 9118.05 224304.08 00:21:24.665 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.665 Verification LBA range: start 0x0 length 0x400 00:21:24.665 Nvme10n1 : 1.16 276.31 17.27 0.00 0.00 200781.38 12195.39 246187.41 00:21:24.665 [2024-11-19T10:32:38.446Z] =================================================================================================================== 00:21:24.665 [2024-11-19T10:32:38.446Z] Total : 2717.19 169.82 0.00 0.00 216073.74 1460.31 246187.41 00:21:24.967 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:24.967 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
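The config-assembly loop traced above (nvmf/common.sh@562-586) builds one JSON fragment per subsystem with a heredoc, appends each to a bash array, and comma-joins the fragments into the payload handed to bdevperf. A minimal standalone sketch of that pattern, with stand-in values for TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, and NVMF_PORT, and with the log's `jq .` validation step omitted so it runs without jq:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf/common.sh config loop seen in the trace above.
# The transport/address/port values below are stand-ins, not the test's.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  # Capture one JSON fragment per subsystem from a heredoc, as the trace does.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments, mirroring the IFS=, / printf step in the log.
IFS=,
joined="${config[*]}"
unset IFS
printf '%s\n' "$joined"
```

The `"${config[*]}"` expansion joins array elements with the first character of IFS, which is why the trace sets `IFS=,` immediately before emitting the combined config: each `}` is followed by `,{`, producing the `},{`-separated stream visible in the printf output above.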
00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:24.968 rmmod nvme_tcp 00:21:24.968 rmmod nvme_fabrics 00:21:24.968 rmmod nvme_keyring 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2318223 ']' 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2318223 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2318223 ']' 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2318223 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2318223 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2318223' 00:21:24.968 killing process with pid 2318223 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2318223 00:21:24.968 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2318223 00:21:25.303 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.303 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.303 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.303 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:25.303 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:25.303 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.303 11:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.303 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.303 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.303 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.303 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.303 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.845 00:21:27.845 real 0m15.441s 00:21:27.845 user 0m34.768s 00:21:27.845 sys 0m5.881s 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.845 ************************************ 00:21:27.845 END TEST nvmf_shutdown_tc1 00:21:27.845 ************************************ 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:27.845 ************************************ 00:21:27.845 
START TEST nvmf_shutdown_tc2 00:21:27.845 ************************************ 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.845 11:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.845 11:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.845 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.846 11:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:27.846 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:27.846 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:27.846 11:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.846 11:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:27.846 Found net devices under 0000:86:00.0: cvl_0_0 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:27.846 Found net devices under 0000:86:00.1: cvl_0_1 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.846 11:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:27.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:27.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:21:27.846 00:21:27.846 --- 10.0.0.2 ping statistics --- 00:21:27.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.846 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:21:27.846 00:21:27.846 --- 10.0.0.1 ping statistics --- 00:21:27.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.846 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:27.846 11:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2320020 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2320020 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:27.846 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2320020 ']' 00:21:27.847 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.847 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.847 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:27.847 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.847 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.847 [2024-11-19 11:32:41.528728] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:21:27.847 [2024-11-19 11:32:41.528770] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.847 [2024-11-19 11:32:41.607645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:28.106 [2024-11-19 11:32:41.650401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.106 [2024-11-19 11:32:41.650441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.106 [2024-11-19 11:32:41.650448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.106 [2024-11-19 11:32:41.650454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.106 [2024-11-19 11:32:41.650459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:28.106 [2024-11-19 11:32:41.652078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.106 [2024-11-19 11:32:41.652184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.106 [2024-11-19 11:32:41.652302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.106 [2024-11-19 11:32:41.652302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:28.674 [2024-11-19 11:32:42.410336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.674 11:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:28.674 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.675 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:28.675 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.675 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:28.675 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.675 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:28.675 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.675 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:28.675 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.675 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:28.934 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.934 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:28.934 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.934 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:28.934 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.934 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:28.934 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:28.934 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.934 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:28.934 Malloc1 00:21:28.934 [2024-11-19 11:32:42.525088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.934 Malloc2 00:21:28.934 Malloc3 00:21:28.934 Malloc4 00:21:28.934 Malloc5 00:21:29.194 Malloc6 00:21:29.194 Malloc7 00:21:29.194 Malloc8 00:21:29.194 Malloc9 
00:21:29.194 Malloc10 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2320297 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2320297 /var/tmp/bdevperf.sock 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2320297 ']' 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:29.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.194 { 00:21:29.194 "params": { 00:21:29.194 "name": "Nvme$subsystem", 00:21:29.194 "trtype": "$TEST_TRANSPORT", 00:21:29.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.194 "adrfam": "ipv4", 00:21:29.194 "trsvcid": "$NVMF_PORT", 00:21:29.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.194 "hdgst": ${hdgst:-false}, 00:21:29.194 "ddgst": ${ddgst:-false} 00:21:29.194 }, 00:21:29.194 "method": "bdev_nvme_attach_controller" 00:21:29.194 } 00:21:29.194 EOF 00:21:29.194 )") 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.194 { 00:21:29.194 "params": { 00:21:29.194 "name": "Nvme$subsystem", 00:21:29.194 "trtype": "$TEST_TRANSPORT", 00:21:29.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.194 
"adrfam": "ipv4", 00:21:29.194 "trsvcid": "$NVMF_PORT", 00:21:29.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.194 "hdgst": ${hdgst:-false}, 00:21:29.194 "ddgst": ${ddgst:-false} 00:21:29.194 }, 00:21:29.194 "method": "bdev_nvme_attach_controller" 00:21:29.194 } 00:21:29.194 EOF 00:21:29.194 )") 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.194 { 00:21:29.194 "params": { 00:21:29.194 "name": "Nvme$subsystem", 00:21:29.194 "trtype": "$TEST_TRANSPORT", 00:21:29.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.194 "adrfam": "ipv4", 00:21:29.194 "trsvcid": "$NVMF_PORT", 00:21:29.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.194 "hdgst": ${hdgst:-false}, 00:21:29.194 "ddgst": ${ddgst:-false} 00:21:29.194 }, 00:21:29.194 "method": "bdev_nvme_attach_controller" 00:21:29.194 } 00:21:29.194 EOF 00:21:29.194 )") 00:21:29.194 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:29.455 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.455 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.455 { 00:21:29.455 "params": { 00:21:29.455 "name": "Nvme$subsystem", 00:21:29.455 "trtype": "$TEST_TRANSPORT", 00:21:29.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.455 "adrfam": "ipv4", 00:21:29.455 "trsvcid": "$NVMF_PORT", 00:21:29.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:29.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.455 "hdgst": ${hdgst:-false}, 00:21:29.455 "ddgst": ${ddgst:-false} 00:21:29.455 }, 00:21:29.455 "method": "bdev_nvme_attach_controller" 00:21:29.455 } 00:21:29.455 EOF 00:21:29.455 )") 00:21:29.455 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:29.455 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.455 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.455 { 00:21:29.455 "params": { 00:21:29.455 "name": "Nvme$subsystem", 00:21:29.455 "trtype": "$TEST_TRANSPORT", 00:21:29.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.455 "adrfam": "ipv4", 00:21:29.455 "trsvcid": "$NVMF_PORT", 00:21:29.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.455 "hdgst": ${hdgst:-false}, 00:21:29.455 "ddgst": ${ddgst:-false} 00:21:29.455 }, 00:21:29.455 "method": "bdev_nvme_attach_controller" 00:21:29.455 } 00:21:29.455 EOF 00:21:29.455 )") 00:21:29.455 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:29.455 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.455 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.455 { 00:21:29.455 "params": { 00:21:29.455 "name": "Nvme$subsystem", 00:21:29.455 "trtype": "$TEST_TRANSPORT", 00:21:29.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.455 "adrfam": "ipv4", 00:21:29.455 "trsvcid": "$NVMF_PORT", 00:21:29.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.455 "hdgst": ${hdgst:-false}, 00:21:29.455 "ddgst": 
${ddgst:-false} 00:21:29.455 }, 00:21:29.455 "method": "bdev_nvme_attach_controller" 00:21:29.455 } 00:21:29.455 EOF 00:21:29.455 )") 00:21:29.455 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:29.455 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.455 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.455 { 00:21:29.455 "params": { 00:21:29.455 "name": "Nvme$subsystem", 00:21:29.455 "trtype": "$TEST_TRANSPORT", 00:21:29.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.455 "adrfam": "ipv4", 00:21:29.455 "trsvcid": "$NVMF_PORT", 00:21:29.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.455 "hdgst": ${hdgst:-false}, 00:21:29.455 "ddgst": ${ddgst:-false} 00:21:29.455 }, 00:21:29.455 "method": "bdev_nvme_attach_controller" 00:21:29.455 } 00:21:29.455 EOF 00:21:29.455 )") 00:21:29.455 [2024-11-19 11:32:42.997421] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:21:29.455 [2024-11-19 11:32:42.997470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320297 ] 00:21:29.455 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:29.455 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.455 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.455 { 00:21:29.455 "params": { 00:21:29.455 "name": "Nvme$subsystem", 00:21:29.455 "trtype": "$TEST_TRANSPORT", 00:21:29.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.455 "adrfam": "ipv4", 00:21:29.455 "trsvcid": "$NVMF_PORT", 00:21:29.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.455 "hdgst": ${hdgst:-false}, 00:21:29.455 "ddgst": ${ddgst:-false} 00:21:29.455 }, 00:21:29.455 "method": "bdev_nvme_attach_controller" 00:21:29.455 } 00:21:29.455 EOF 00:21:29.455 )") 00:21:29.455 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:29.455 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.455 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.455 { 00:21:29.455 "params": { 00:21:29.455 "name": "Nvme$subsystem", 00:21:29.455 "trtype": "$TEST_TRANSPORT", 00:21:29.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.455 "adrfam": "ipv4", 00:21:29.455 "trsvcid": "$NVMF_PORT", 00:21:29.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.455 "hdgst": 
${hdgst:-false}, 00:21:29.455 "ddgst": ${ddgst:-false} 00:21:29.455 }, 00:21:29.455 "method": "bdev_nvme_attach_controller" 00:21:29.455 } 00:21:29.455 EOF 00:21:29.455 )") 00:21:29.455 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:29.455 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.455 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.455 { 00:21:29.455 "params": { 00:21:29.455 "name": "Nvme$subsystem", 00:21:29.455 "trtype": "$TEST_TRANSPORT", 00:21:29.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.455 "adrfam": "ipv4", 00:21:29.455 "trsvcid": "$NVMF_PORT", 00:21:29.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.455 "hdgst": ${hdgst:-false}, 00:21:29.455 "ddgst": ${ddgst:-false} 00:21:29.455 }, 00:21:29.455 "method": "bdev_nvme_attach_controller" 00:21:29.455 } 00:21:29.455 EOF 00:21:29.455 )") 00:21:29.455 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:29.455 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:21:29.455 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:29.455 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:29.455 "params": { 00:21:29.455 "name": "Nvme1", 00:21:29.455 "trtype": "tcp", 00:21:29.455 "traddr": "10.0.0.2", 00:21:29.455 "adrfam": "ipv4", 00:21:29.455 "trsvcid": "4420", 00:21:29.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.455 "hdgst": false, 00:21:29.455 "ddgst": false 00:21:29.455 }, 00:21:29.455 "method": "bdev_nvme_attach_controller" 00:21:29.455 },{ 00:21:29.455 "params": { 00:21:29.455 "name": "Nvme2", 00:21:29.455 "trtype": "tcp", 00:21:29.455 "traddr": "10.0.0.2", 00:21:29.455 "adrfam": "ipv4", 00:21:29.455 "trsvcid": "4420", 00:21:29.455 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:29.455 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:29.455 "hdgst": false, 00:21:29.455 "ddgst": false 00:21:29.456 }, 00:21:29.456 "method": "bdev_nvme_attach_controller" 00:21:29.456 },{ 00:21:29.456 "params": { 00:21:29.456 "name": "Nvme3", 00:21:29.456 "trtype": "tcp", 00:21:29.456 "traddr": "10.0.0.2", 00:21:29.456 "adrfam": "ipv4", 00:21:29.456 "trsvcid": "4420", 00:21:29.456 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:29.456 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:29.456 "hdgst": false, 00:21:29.456 "ddgst": false 00:21:29.456 }, 00:21:29.456 "method": "bdev_nvme_attach_controller" 00:21:29.456 },{ 00:21:29.456 "params": { 00:21:29.456 "name": "Nvme4", 00:21:29.456 "trtype": "tcp", 00:21:29.456 "traddr": "10.0.0.2", 00:21:29.456 "adrfam": "ipv4", 00:21:29.456 "trsvcid": "4420", 00:21:29.456 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:29.456 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:29.456 "hdgst": false, 00:21:29.456 "ddgst": false 00:21:29.456 }, 00:21:29.456 "method": "bdev_nvme_attach_controller" 00:21:29.456 },{ 00:21:29.456 "params": { 
00:21:29.456 "name": "Nvme5", 00:21:29.456 "trtype": "tcp", 00:21:29.456 "traddr": "10.0.0.2", 00:21:29.456 "adrfam": "ipv4", 00:21:29.456 "trsvcid": "4420", 00:21:29.456 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:29.456 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:29.456 "hdgst": false, 00:21:29.456 "ddgst": false 00:21:29.456 }, 00:21:29.456 "method": "bdev_nvme_attach_controller" 00:21:29.456 },{ 00:21:29.456 "params": { 00:21:29.456 "name": "Nvme6", 00:21:29.456 "trtype": "tcp", 00:21:29.456 "traddr": "10.0.0.2", 00:21:29.456 "adrfam": "ipv4", 00:21:29.456 "trsvcid": "4420", 00:21:29.456 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:29.456 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:29.456 "hdgst": false, 00:21:29.456 "ddgst": false 00:21:29.456 }, 00:21:29.456 "method": "bdev_nvme_attach_controller" 00:21:29.456 },{ 00:21:29.456 "params": { 00:21:29.456 "name": "Nvme7", 00:21:29.456 "trtype": "tcp", 00:21:29.456 "traddr": "10.0.0.2", 00:21:29.456 "adrfam": "ipv4", 00:21:29.456 "trsvcid": "4420", 00:21:29.456 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:29.456 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:29.456 "hdgst": false, 00:21:29.456 "ddgst": false 00:21:29.456 }, 00:21:29.456 "method": "bdev_nvme_attach_controller" 00:21:29.456 },{ 00:21:29.456 "params": { 00:21:29.456 "name": "Nvme8", 00:21:29.456 "trtype": "tcp", 00:21:29.456 "traddr": "10.0.0.2", 00:21:29.456 "adrfam": "ipv4", 00:21:29.456 "trsvcid": "4420", 00:21:29.456 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:29.456 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:29.456 "hdgst": false, 00:21:29.456 "ddgst": false 00:21:29.456 }, 00:21:29.456 "method": "bdev_nvme_attach_controller" 00:21:29.456 },{ 00:21:29.456 "params": { 00:21:29.456 "name": "Nvme9", 00:21:29.456 "trtype": "tcp", 00:21:29.456 "traddr": "10.0.0.2", 00:21:29.456 "adrfam": "ipv4", 00:21:29.456 "trsvcid": "4420", 00:21:29.456 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:29.456 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:29.456 "hdgst": false, 00:21:29.456 "ddgst": false 00:21:29.456 }, 00:21:29.456 "method": "bdev_nvme_attach_controller" 00:21:29.456 },{ 00:21:29.456 "params": { 00:21:29.456 "name": "Nvme10", 00:21:29.456 "trtype": "tcp", 00:21:29.456 "traddr": "10.0.0.2", 00:21:29.456 "adrfam": "ipv4", 00:21:29.456 "trsvcid": "4420", 00:21:29.456 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:29.456 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:29.456 "hdgst": false, 00:21:29.456 "ddgst": false 00:21:29.456 }, 00:21:29.456 "method": "bdev_nvme_attach_controller" 00:21:29.456 }' 00:21:29.456 [2024-11-19 11:32:43.074975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.456 [2024-11-19 11:32:43.116604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.834 Running I/O for 10 seconds... 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:31.402 11:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 2320297 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2320297 ']' 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2320297 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2320297 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2320297' 00:21:31.402 killing process with pid 2320297 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2320297 00:21:31.402 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2320297 00:21:31.402 Received shutdown signal, test time was about 0.729135 seconds 00:21:31.402 00:21:31.402 Latency(us) 00:21:31.402 [2024-11-19T10:32:45.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.402 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.402 Verification LBA range: start 0x0 length 0x400 00:21:31.402 Nvme1n1 : 0.71 269.62 16.85 0.00 0.00 233510.59 18464.06 222480.47 00:21:31.402 Job: Nvme2n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:21:31.402 Verification LBA range: start 0x0 length 0x400 00:21:31.402 Nvme2n1 : 0.72 266.08 16.63 0.00 0.00 230437.10 17210.32 221568.67 00:21:31.402 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.402 Verification LBA range: start 0x0 length 0x400 00:21:31.402 Nvme3n1 : 0.69 277.23 17.33 0.00 0.00 214477.84 15272.74 213362.42 00:21:31.402 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.402 Verification LBA range: start 0x0 length 0x400 00:21:31.402 Nvme4n1 : 0.70 275.81 17.24 0.00 0.00 209389.89 16070.57 213362.42 00:21:31.402 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.402 Verification LBA range: start 0x0 length 0x400 00:21:31.402 Nvme5n1 : 0.71 268.61 16.79 0.00 0.00 209946.42 17324.30 208803.39 00:21:31.402 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.402 Verification LBA range: start 0x0 length 0x400 00:21:31.402 Nvme6n1 : 0.70 273.14 17.07 0.00 0.00 199871.44 26442.35 186920.07 00:21:31.402 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.402 Verification LBA range: start 0x0 length 0x400 00:21:31.402 Nvme7n1 : 0.71 271.54 16.97 0.00 0.00 195663.25 16412.49 214274.23 00:21:31.402 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.402 Verification LBA range: start 0x0 length 0x400 00:21:31.402 Nvme8n1 : 0.72 267.11 16.69 0.00 0.00 193515.22 26670.30 206067.98 00:21:31.402 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.402 Verification LBA range: start 0x0 length 0x400 00:21:31.402 Nvme9n1 : 0.73 263.57 16.47 0.00 0.00 189556.35 17894.18 232510.33 00:21:31.402 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.402 Verification LBA range: start 0x0 length 0x400 00:21:31.402 Nvme10n1 : 0.70 194.39 12.15 0.00 0.00 240873.29 9459.98 
238892.97 00:21:31.402 [2024-11-19T10:32:45.183Z] =================================================================================================================== 00:21:31.402 [2024-11-19T10:32:45.183Z] Total : 2627.11 164.19 0.00 0.00 210848.41 9459.98 238892.97 00:21:31.661 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2320020 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.597 rmmod nvme_tcp 00:21:32.597 rmmod nvme_fabrics 00:21:32.597 rmmod nvme_keyring 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2320020 ']' 00:21:32.597 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2320020 00:21:32.598 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2320020 ']' 00:21:32.598 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2320020 00:21:32.598 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:32.598 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.598 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2320020 00:21:32.857 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:32.857 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:32.857 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2320020' 00:21:32.857 killing process with pid 2320020 00:21:32.857 11:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2320020 00:21:32.857 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2320020 00:21:33.116 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:33.116 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:33.116 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:33.116 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:33.116 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:33.116 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:33.116 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:33.116 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:33.116 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:33.116 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.116 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.116 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:35.651 00:21:35.651 real 
0m7.663s 00:21:35.651 user 0m22.620s 00:21:35.651 sys 0m1.325s 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.651 ************************************ 00:21:35.651 END TEST nvmf_shutdown_tc2 00:21:35.651 ************************************ 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:35.651 ************************************ 00:21:35.651 START TEST nvmf_shutdown_tc3 00:21:35.651 ************************************ 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.651 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:35.652 
11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:35.652 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:35.652 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:35.652 11:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:35.652 Found net devices under 0000:86:00.0: cvl_0_0 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.652 
11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:35.652 Found net devices under 0000:86:00.1: cvl_0_1 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.652 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.652 11:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:35.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:21:35.652 00:21:35.652 --- 10.0.0.2 ping statistics --- 00:21:35.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.652 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:21:35.652 00:21:35.652 --- 10.0.0.1 ping statistics --- 00:21:35.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.652 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.652 
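The `nvmf_tcp_init` portion of the trace builds a two-endpoint topology on one host: the target interface is moved into a network namespace and addressed 10.0.0.2, the initiator interface stays in the root namespace at 10.0.0.1, and a ping in each direction verifies reachability. The sketch below reproduces that shape with a veth pair standing in for the physical cvl_0_0/cvl_0_1 ports; every interface and namespace name here is illustrative, and the function needs root, so treat it as a manual-experiment sketch rather than the SPDK helper itself.

```shell
#!/usr/bin/env bash
# Sketch of the netns topology from nvmf/common.sh@265-291 above,
# using a veth pair instead of the physical NIC ports (assumed names).
setup_tcp_topology() {
    ip netns add spdk_tgt_ns
    ip link add veth_init type veth peer name veth_tgt
    # Move the target end into the namespace, then address both ends.
    ip link set veth_tgt netns spdk_tgt_ns
    ip addr add 10.0.0.1/24 dev veth_init
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_init up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    # Verify reachability in both directions, as the trace does.
    ping -c 1 10.0.0.2
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1
}
```

Running nvmf_tgt inside the namespace (via `ip netns exec`, as the `NVMF_TARGET_NS_CMD` trace shows) forces initiator traffic through a real TCP path instead of shortcutting over loopback.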
11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2321342 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2321342 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2321342 ']' 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.652 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.652 [2024-11-19 11:32:49.252805] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:21:35.652 [2024-11-19 11:32:49.252850] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.652 [2024-11-19 11:32:49.333594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.652 [2024-11-19 11:32:49.376763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.652 [2024-11-19 11:32:49.376800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.652 [2024-11-19 11:32:49.376808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.652 [2024-11-19 11:32:49.376814] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.652 [2024-11-19 11:32:49.376819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:35.652 [2024-11-19 11:32:49.378289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.652 [2024-11-19 11:32:49.378396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.652 [2024-11-19 11:32:49.378503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.652 [2024-11-19 11:32:49.378504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.588 [2024-11-19 11:32:50.141477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.588 11:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.588 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.588 Malloc1 00:21:36.588 [2024-11-19 11:32:50.252409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.588 Malloc2 00:21:36.588 Malloc3 00:21:36.588 Malloc4 00:21:36.846 Malloc5 00:21:36.846 Malloc6 00:21:36.846 Malloc7 00:21:36.846 Malloc8 00:21:36.846 Malloc9 
00:21:36.846 Malloc10 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2321627 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2321627 /var/tmp/bdevperf.sock 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2321627 ']' 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:37.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:37.106 { 00:21:37.106 "params": { 00:21:37.106 "name": "Nvme$subsystem", 00:21:37.106 "trtype": "$TEST_TRANSPORT", 00:21:37.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.106 "adrfam": "ipv4", 00:21:37.106 "trsvcid": "$NVMF_PORT", 00:21:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.106 "hdgst": ${hdgst:-false}, 00:21:37.106 "ddgst": ${ddgst:-false} 00:21:37.106 }, 00:21:37.106 "method": "bdev_nvme_attach_controller" 00:21:37.106 } 00:21:37.106 EOF 00:21:37.106 )") 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:37.106 { 00:21:37.106 "params": { 00:21:37.106 "name": "Nvme$subsystem", 00:21:37.106 "trtype": "$TEST_TRANSPORT", 00:21:37.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.106 
"adrfam": "ipv4", 00:21:37.106 "trsvcid": "$NVMF_PORT", 00:21:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.106 "hdgst": ${hdgst:-false}, 00:21:37.106 "ddgst": ${ddgst:-false} 00:21:37.106 }, 00:21:37.106 "method": "bdev_nvme_attach_controller" 00:21:37.106 } 00:21:37.106 EOF 00:21:37.106 )") 00:21:37.106 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:37.107 { 00:21:37.107 "params": { 00:21:37.107 "name": "Nvme$subsystem", 00:21:37.107 "trtype": "$TEST_TRANSPORT", 00:21:37.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.107 "adrfam": "ipv4", 00:21:37.107 "trsvcid": "$NVMF_PORT", 00:21:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.107 "hdgst": ${hdgst:-false}, 00:21:37.107 "ddgst": ${ddgst:-false} 00:21:37.107 }, 00:21:37.107 "method": "bdev_nvme_attach_controller" 00:21:37.107 } 00:21:37.107 EOF 00:21:37.107 )") 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:37.107 { 00:21:37.107 "params": { 00:21:37.107 "name": "Nvme$subsystem", 00:21:37.107 "trtype": "$TEST_TRANSPORT", 00:21:37.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.107 "adrfam": "ipv4", 00:21:37.107 "trsvcid": "$NVMF_PORT", 00:21:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:37.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.107 "hdgst": ${hdgst:-false}, 00:21:37.107 "ddgst": ${ddgst:-false} 00:21:37.107 }, 00:21:37.107 "method": "bdev_nvme_attach_controller" 00:21:37.107 } 00:21:37.107 EOF 00:21:37.107 )") 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:37.107 { 00:21:37.107 "params": { 00:21:37.107 "name": "Nvme$subsystem", 00:21:37.107 "trtype": "$TEST_TRANSPORT", 00:21:37.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.107 "adrfam": "ipv4", 00:21:37.107 "trsvcid": "$NVMF_PORT", 00:21:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.107 "hdgst": ${hdgst:-false}, 00:21:37.107 "ddgst": ${ddgst:-false} 00:21:37.107 }, 00:21:37.107 "method": "bdev_nvme_attach_controller" 00:21:37.107 } 00:21:37.107 EOF 00:21:37.107 )") 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:37.107 { 00:21:37.107 "params": { 00:21:37.107 "name": "Nvme$subsystem", 00:21:37.107 "trtype": "$TEST_TRANSPORT", 00:21:37.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.107 "adrfam": "ipv4", 00:21:37.107 "trsvcid": "$NVMF_PORT", 00:21:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.107 "hdgst": ${hdgst:-false}, 00:21:37.107 "ddgst": 
${ddgst:-false} 00:21:37.107 }, 00:21:37.107 "method": "bdev_nvme_attach_controller" 00:21:37.107 } 00:21:37.107 EOF 00:21:37.107 )") 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:37.107 [2024-11-19 11:32:50.730624] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:21:37.107 [2024-11-19 11:32:50.730671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2321627 ] 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:37.107 { 00:21:37.107 "params": { 00:21:37.107 "name": "Nvme$subsystem", 00:21:37.107 "trtype": "$TEST_TRANSPORT", 00:21:37.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.107 "adrfam": "ipv4", 00:21:37.107 "trsvcid": "$NVMF_PORT", 00:21:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.107 "hdgst": ${hdgst:-false}, 00:21:37.107 "ddgst": ${ddgst:-false} 00:21:37.107 }, 00:21:37.107 "method": "bdev_nvme_attach_controller" 00:21:37.107 } 00:21:37.107 EOF 00:21:37.107 )") 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:37.107 { 00:21:37.107 "params": { 00:21:37.107 "name": "Nvme$subsystem", 00:21:37.107 "trtype": "$TEST_TRANSPORT", 00:21:37.107 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.107 "adrfam": "ipv4", 00:21:37.107 "trsvcid": "$NVMF_PORT", 00:21:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.107 "hdgst": ${hdgst:-false}, 00:21:37.107 "ddgst": ${ddgst:-false} 00:21:37.107 }, 00:21:37.107 "method": "bdev_nvme_attach_controller" 00:21:37.107 } 00:21:37.107 EOF 00:21:37.107 )") 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:37.107 { 00:21:37.107 "params": { 00:21:37.107 "name": "Nvme$subsystem", 00:21:37.107 "trtype": "$TEST_TRANSPORT", 00:21:37.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.107 "adrfam": "ipv4", 00:21:37.107 "trsvcid": "$NVMF_PORT", 00:21:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.107 "hdgst": ${hdgst:-false}, 00:21:37.107 "ddgst": ${ddgst:-false} 00:21:37.107 }, 00:21:37.107 "method": "bdev_nvme_attach_controller" 00:21:37.107 } 00:21:37.107 EOF 00:21:37.107 )") 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:37.107 { 00:21:37.107 "params": { 00:21:37.107 "name": "Nvme$subsystem", 00:21:37.107 "trtype": "$TEST_TRANSPORT", 00:21:37.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.107 "adrfam": "ipv4", 00:21:37.107 "trsvcid": "$NVMF_PORT", 00:21:37.107 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.107 "hdgst": ${hdgst:-false}, 00:21:37.107 "ddgst": ${ddgst:-false} 00:21:37.107 }, 00:21:37.107 "method": "bdev_nvme_attach_controller" 00:21:37.107 } 00:21:37.107 EOF 00:21:37.107 )") 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:37.107 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:37.107 "params": { 00:21:37.107 "name": "Nvme1", 00:21:37.107 "trtype": "tcp", 00:21:37.107 "traddr": "10.0.0.2", 00:21:37.107 "adrfam": "ipv4", 00:21:37.107 "trsvcid": "4420", 00:21:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:37.107 "hdgst": false, 00:21:37.107 "ddgst": false 00:21:37.107 }, 00:21:37.107 "method": "bdev_nvme_attach_controller" 00:21:37.107 },{ 00:21:37.107 "params": { 00:21:37.107 "name": "Nvme2", 00:21:37.107 "trtype": "tcp", 00:21:37.107 "traddr": "10.0.0.2", 00:21:37.107 "adrfam": "ipv4", 00:21:37.107 "trsvcid": "4420", 00:21:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:37.107 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:37.107 "hdgst": false, 00:21:37.107 "ddgst": false 00:21:37.107 }, 00:21:37.107 "method": "bdev_nvme_attach_controller" 00:21:37.107 },{ 00:21:37.107 "params": { 00:21:37.107 "name": "Nvme3", 00:21:37.107 "trtype": "tcp", 00:21:37.107 "traddr": "10.0.0.2", 00:21:37.107 "adrfam": "ipv4", 00:21:37.107 "trsvcid": "4420", 00:21:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:37.107 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:37.107 "hdgst": false, 00:21:37.107 "ddgst": false 00:21:37.107 }, 00:21:37.107 
"method": "bdev_nvme_attach_controller" 00:21:37.107 },{ 00:21:37.107 "params": { 00:21:37.107 "name": "Nvme4", 00:21:37.107 "trtype": "tcp", 00:21:37.107 "traddr": "10.0.0.2", 00:21:37.107 "adrfam": "ipv4", 00:21:37.107 "trsvcid": "4420", 00:21:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:37.107 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:37.107 "hdgst": false, 00:21:37.107 "ddgst": false 00:21:37.108 }, 00:21:37.108 "method": "bdev_nvme_attach_controller" 00:21:37.108 },{ 00:21:37.108 "params": { 00:21:37.108 "name": "Nvme5", 00:21:37.108 "trtype": "tcp", 00:21:37.108 "traddr": "10.0.0.2", 00:21:37.108 "adrfam": "ipv4", 00:21:37.108 "trsvcid": "4420", 00:21:37.108 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:37.108 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:37.108 "hdgst": false, 00:21:37.108 "ddgst": false 00:21:37.108 }, 00:21:37.108 "method": "bdev_nvme_attach_controller" 00:21:37.108 },{ 00:21:37.108 "params": { 00:21:37.108 "name": "Nvme6", 00:21:37.108 "trtype": "tcp", 00:21:37.108 "traddr": "10.0.0.2", 00:21:37.108 "adrfam": "ipv4", 00:21:37.108 "trsvcid": "4420", 00:21:37.108 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:37.108 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:37.108 "hdgst": false, 00:21:37.108 "ddgst": false 00:21:37.108 }, 00:21:37.108 "method": "bdev_nvme_attach_controller" 00:21:37.108 },{ 00:21:37.108 "params": { 00:21:37.108 "name": "Nvme7", 00:21:37.108 "trtype": "tcp", 00:21:37.108 "traddr": "10.0.0.2", 00:21:37.108 "adrfam": "ipv4", 00:21:37.108 "trsvcid": "4420", 00:21:37.108 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:37.108 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:37.108 "hdgst": false, 00:21:37.108 "ddgst": false 00:21:37.108 }, 00:21:37.108 "method": "bdev_nvme_attach_controller" 00:21:37.108 },{ 00:21:37.108 "params": { 00:21:37.108 "name": "Nvme8", 00:21:37.108 "trtype": "tcp", 00:21:37.108 "traddr": "10.0.0.2", 00:21:37.108 "adrfam": "ipv4", 00:21:37.108 "trsvcid": "4420", 00:21:37.108 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:21:37.108 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:37.108 "hdgst": false, 00:21:37.108 "ddgst": false 00:21:37.108 }, 00:21:37.108 "method": "bdev_nvme_attach_controller" 00:21:37.108 },{ 00:21:37.108 "params": { 00:21:37.108 "name": "Nvme9", 00:21:37.108 "trtype": "tcp", 00:21:37.108 "traddr": "10.0.0.2", 00:21:37.108 "adrfam": "ipv4", 00:21:37.108 "trsvcid": "4420", 00:21:37.108 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:37.108 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:37.108 "hdgst": false, 00:21:37.108 "ddgst": false 00:21:37.108 }, 00:21:37.108 "method": "bdev_nvme_attach_controller" 00:21:37.108 },{ 00:21:37.108 "params": { 00:21:37.108 "name": "Nvme10", 00:21:37.108 "trtype": "tcp", 00:21:37.108 "traddr": "10.0.0.2", 00:21:37.108 "adrfam": "ipv4", 00:21:37.108 "trsvcid": "4420", 00:21:37.108 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:37.108 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:37.108 "hdgst": false, 00:21:37.108 "ddgst": false 00:21:37.108 }, 00:21:37.108 "method": "bdev_nvme_attach_controller" 00:21:37.108 }' 00:21:37.108 [2024-11-19 11:32:50.808593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.108 [2024-11-19 11:32:50.850280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.016 Running I/O for 10 seconds... 
00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:39.016 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:39.017 11:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:39.017 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:39.017 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.017 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.017 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.017 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:39.017 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:39.017 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2321342 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2321342 ']' 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2321342 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.285 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2321342 00:21:39.285 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:39.285 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:39.285 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2321342' 00:21:39.285 killing process with pid 2321342 00:21:39.285 11:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2321342 00:21:39.285 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2321342 00:21:39.285 [2024-11-19 11:32:53.042585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b8180 is same with the state(6) to be set
is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.043042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b8180 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.043048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b8180 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.043054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b8180 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 
00:21:39.286 [2024-11-19 11:32:53.044224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044301] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.286 [2024-11-19 11:32:53.044433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 
is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 
00:21:39.287 [2024-11-19 11:32:53.044534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.044562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5bf0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.045740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.287 [2024-11-19 11:32:53.045772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.045783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.287 [2024-11-19 11:32:53.045790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.045797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.287 [2024-11-19 11:32:53.045805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.045812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.287 [2024-11-19 11:32:53.045818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.045825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c61b0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.045854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.287 [2024-11-19 11:32:53.045863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.045874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.287 [2024-11-19 11:32:53.045881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.045888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.287 [2024-11-19 11:32:53.045895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.045902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.287 [2024-11-19 11:32:53.045908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.045915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cc40 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.045982] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.287 [2024-11-19 11:32:53.045991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.045999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.287 [2024-11-19 11:32:53.046005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.046012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.287 [2024-11-19 11:32:53.046019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.046026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.287 [2024-11-19 11:32:53.046033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.046039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c5d50 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.287 [2024-11-19 11:32:53.046333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.046326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the 
state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.287 [2024-11-19 11:32:53.046351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.046360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.287 [2024-11-19 11:32:53.046368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 11:32:53.046375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 he state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.287 [2024-11-19 11:32:53.046396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.046403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.287 [2024-11-19 11:32:53.046410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.046417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:1[2024-11-19 11:32:53.046425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.287 he state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 11:32:53.046435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 he state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.287 [2024-11-19 11:32:53.046451] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.046458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.287 [2024-11-19 11:32:53.046466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.287 [2024-11-19 11:32:53.046473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.287 [2024-11-19 11:32:53.046479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:1[2024-11-19 11:32:53.046480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.287 he state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 11:32:53.046489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 he state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 
[2024-11-19 11:32:53.046503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 11:32:53.046530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 he state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:1[2024-11-19 11:32:53.046645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7b60c0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 he state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 11:32:53.046654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 he state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 
11:32:53.046699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with t[2024-11-19 11:32:53.046710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:21:39.288 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.288 [2024-11-19 11:32:53.046817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.288 [2024-11-19 11:32:53.046818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.288 [2024-11-19 11:32:53.046827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.289 [2024-11-19 11:32:53.046830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.046834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b60c0 is same with the state(6) to be set 00:21:39.289 [2024-11-19 11:32:53.046838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.046847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.046853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.046861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.046868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.046875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.046882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.046890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.046897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.046905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.046912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.046920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.046926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.046934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.046940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.046956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.046963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.046971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.046977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.046985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.046991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.046999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 
11:32:53.047106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047191] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.289 [2024-11-19 11:32:53.047332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.289 [2024-11-19 11:32:53.047339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.290 [2024-11-19 11:32:53.047346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.290 
[2024-11-19 11:32:53.047352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.290 [2024-11-19 11:32:53.047360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.290 [2024-11-19 11:32:53.047369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.290 [2024-11-19 11:32:53.047397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:39.290 [2024-11-19 11:32:53.047999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the 
state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 
11:32:53.048143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048221] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 
is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.048409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b65b0 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.049118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.049144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.049151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.049158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.049164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.049171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 
00:21:39.290 [2024-11-19 11:32:53.049178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.049184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.049190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.290 [2024-11-19 11:32:53.049195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049256] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set 00:21:39.291 [2024-11-19 11:32:53.049333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7b6930 is same with the state(6) to be set
00:21:39.291 [2024-11-19 11:32:53.050487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6e00 is same with the state(6) to be set
00:21:39.292 [2024-11-19 11:32:53.050717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:39.292 [2024-11-19 11:32:53.050750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c61b0 (9): Bad file descriptor
00:21:39.292 [2024-11-19 11:32:53.052291] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.292 [2024-11-19 11:32:53.052638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.292 [2024-11-19 11:32:53.052644] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.292 [2024-11-19 11:32:53.052652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.292 [2024-11-19 11:32:53.052658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:1[2024-11-19 11:32:53.052762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7b77c0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 he state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 11:32:53.052771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 he state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with t[2024-11-19 11:32:53.052790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:21:39.293 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 
11:32:53.052818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with t[2024-11-19 11:32:53.052851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128he state(6) to be set 00:21:39.293 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with t[2024-11-19 11:32:53.052861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:21:39.293 dw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 11:32:53.052898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 he state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 
is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.052986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.052993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.052999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.293 [2024-11-19 11:32:53.053000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.053009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.293 [2024-11-19 11:32:53.053009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.053020]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.293 [2024-11-19 11:32:53.053021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.294 [2024-11-19 11:32:53.053029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.294 [2024-11-19 11:32:53.053040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.294 [2024-11-19 11:32:53.053048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 
11:32:53.053171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053261] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.294 [2024-11-19 11:32:53.053387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.294 [2024-11-19 11:32:53.053458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:39.294 [2024-11-19 11:32:53.053666] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:21:39.294 [2024-11-19 11:32:53.053915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.294 [2024-11-19 11:32:53.053932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c61b0 with addr=10.0.0.2, port=4420 00:21:39.294 [2024-11-19 11:32:53.053940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c61b0 is same with the state(6) to be set 00:21:39.294 [2024-11-19 11:32:53.054000] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.294 [2024-11-19 11:32:53.054049] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.564 [2024-11-19 11:32:53.055197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:39.564 [2024-11-19 11:32:53.055245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17da610 (9): Bad file descriptor 00:21:39.564 [2024-11-19 11:32:53.055258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c61b0 (9): Bad file descriptor 00:21:39.564 [2024-11-19 11:32:53.055516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:39.564 [2024-11-19 11:32:53.055532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:39.564 [2024-11-19 11:32:53.055541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:39.564 [2024-11-19 11:32:53.055550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:39.564 [2024-11-19 11:32:53.055584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055673] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.564 [2024-11-19 11:32:53.055856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.564 [2024-11-19 11:32:53.055864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.565 [2024-11-19 11:32:53.055906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.565 [2024-11-19 11:32:53.055971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.565 [2024-11-19 11:32:53.056010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.565 [2024-11-19 11:32:53.056059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.565 [2024-11-19 11:32:53.056103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.565 [2024-11-19 11:32:53.056156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.565 [2024-11-19 11:32:53.056201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.565 [2024-11-19 11:32:53.056251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.565 [2024-11-19 11:32:53.056301] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.565 [2024-11-19 11:32:53.056349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.565 [2024-11-19 11:32:53.056397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.565 [2024-11-19 11:32:53.056446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.565 [2024-11-19 11:32:53.056492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.565 [2024-11-19 11:32:53.056540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.565 [2024-11-19 11:32:53.056589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.565 [2024-11-19 11:32:53.056637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.565 [2024-11-19 11:32:53.056683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.565 [2024-11-19 11:32:53.066127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is 
same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 
00:21:39.565 [2024-11-19 11:32:53.066257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.066292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067084] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 
is same with the state(6) to be set 00:21:39.565 [2024-11-19 11:32:53.067235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7c90 is same with the state(6) to be set 00:21:39.566 [2024-11-19 11:32:53.072131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 
[2024-11-19 11:32:53.072194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072711] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.566 [2024-11-19 11:32:53.072824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.566 [2024-11-19 11:32:53.072834] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.072847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.567 [2024-11-19 11:32:53.072857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.072872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.567 [2024-11-19 11:32:53.072882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.072895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.567 [2024-11-19 11:32:53.072904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.072917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.567 [2024-11-19 11:32:53.072927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.072940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.567 [2024-11-19 11:32:53.072957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.072970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.567 [2024-11-19 11:32:53.072980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.072992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc8a10 is same with the state(6) to be set 00:21:39.567 [2024-11-19 11:32:53.073102] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.567 [2024-11-19 11:32:53.073834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.567 [2024-11-19 11:32:53.073862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17da610 with addr=10.0.0.2, port=4420 00:21:39.567 [2024-11-19 11:32:53.073874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17da610 is same with the state(6) to be set 00:21:39.567 [2024-11-19 11:32:53.073894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3cc40 (9): Bad file descriptor 00:21:39.567 [2024-11-19 11:32:53.073941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.073966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.073978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.073988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.073999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 
11:32:53.074010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3c70 is same with the state(6) to be set 00:21:39.567 [2024-11-19 11:32:53.074073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf14c0 is same with the state(6) to be set 00:21:39.567 [2024-11-19 11:32:53.074196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d36140 is same with the state(6) to be set 00:21:39.567 [2024-11-19 11:32:53.074319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074332] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf07a0 is same with the state(6) to be set 00:21:39.567 [2024-11-19 11:32:53.074421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c5d50 (9): Bad file descriptor 00:21:39.567 [2024-11-19 11:32:53.074456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7300 is same with the state(6) to be set 00:21:39.567 [2024-11-19 11:32:53.074575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.567 [2024-11-19 11:32:53.074651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.567 [2024-11-19 11:32:53.074661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d03590 is same with the state(6) to be set 00:21:39.567 [2024-11-19 11:32:53.074699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17da610 (9): Bad file descriptor 00:21:39.567 [2024-11-19 11:32:53.076259] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.567 [2024-11-19 11:32:53.076433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:39.567 [2024-11-19 11:32:53.076459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf07a0 (9): Bad file descriptor 00:21:39.567 [2024-11-19 11:32:53.076655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:39.567 [2024-11-19 11:32:53.076692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:39.567 [2024-11-19 11:32:53.076704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:39.567 [2024-11-19 11:32:53.076716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:39.568 [2024-11-19 11:32:53.076732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:21:39.568 [2024-11-19 11:32:53.077308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.568 [2024-11-19 11:32:53.077332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf07a0 with addr=10.0.0.2, port=4420 00:21:39.568 [2024-11-19 11:32:53.077344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf07a0 is same with the state(6) to be set 00:21:39.568 [2024-11-19 11:32:53.077495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.568 [2024-11-19 11:32:53.077510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c61b0 with addr=10.0.0.2, port=4420 00:21:39.568 [2024-11-19 11:32:53.077521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c61b0 is same with the state(6) to be set 00:21:39.568 [2024-11-19 11:32:53.077603] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.568 [2024-11-19 11:32:53.077660] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:39.568 [2024-11-19 11:32:53.077684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf07a0 (9): Bad file descriptor 00:21:39.568 [2024-11-19 11:32:53.077699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c61b0 (9): Bad file descriptor 00:21:39.568 [2024-11-19 11:32:53.077788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:39.568 [2024-11-19 11:32:53.077801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:39.568 [2024-11-19 11:32:53.077812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:21:39.568 [2024-11-19 11:32:53.077822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:39.568 [2024-11-19 11:32:53.077833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:39.568 [2024-11-19 11:32:53.077843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:39.568 [2024-11-19 11:32:53.077853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:39.568 [2024-11-19 11:32:53.077862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:39.568 [2024-11-19 11:32:53.083668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c3c70 (9): Bad file descriptor 00:21:39.568 [2024-11-19 11:32:53.083699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf14c0 (9): Bad file descriptor 00:21:39.568 [2024-11-19 11:32:53.083719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d36140 (9): Bad file descriptor 00:21:39.568 [2024-11-19 11:32:53.083747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce7300 (9): Bad file descriptor 00:21:39.568 [2024-11-19 11:32:53.083766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d03590 (9): Bad file descriptor 00:21:39.568 [2024-11-19 11:32:53.083896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.083909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.083924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.083933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.083959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.083969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.083979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.083988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.083999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:39.568 [2024-11-19 11:32:53.084160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.568 [2024-11-19 11:32:53.084351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.568 [2024-11-19 11:32:53.084361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084591] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084696] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 
11:32:53.084915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.084988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.084999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.085007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.085018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.085026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.085037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.085046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.085056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.085065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.085076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.085084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.085095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.569 [2024-11-19 11:32:53.085103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.569 [2024-11-19 11:32:53.085114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.085122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.085133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 
nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.085141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.085151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acb6a0 is same with the state(6) to be set 00:21:39.570 [2024-11-19 11:32:53.086482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.570 [2024-11-19 11:32:53.086584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.086981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.086991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.087002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.087010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.087021] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.087029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.087040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.087048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.087058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.087067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.087078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.087086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.087097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.087105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.570 [2024-11-19 11:32:53.087116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.570 [2024-11-19 11:32:53.087124] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.570 [2024-11-19 11:32:53.087134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.570 [2024-11-19 11:32:53.087143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.570 [2024-11-19 11:32:53.087154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.570 [2024-11-19 11:32:53.087163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.570 [2024-11-19 11:32:53.087173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.570 [2024-11-19 11:32:53.087182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.570 [2024-11-19 11:32:53.087192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.570 [2024-11-19 11:32:53.087201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.570 [2024-11-19 11:32:53.087211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.571 [2024-11-19 11:32:53.087724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.571 [2024-11-19 11:32:53.087734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50f30 is same with the state(6) to be set 
00:21:39.571 [2024-11-19 11:32:53.089014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 
00:21:39.571 [2024-11-19 11:32:53.089035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 
00:21:39.571 [2024-11-19 11:32:53.089046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 
00:21:39.571 [2024-11-19 11:32:53.089412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:39.571 [2024-11-19 11:32:53.089434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17da610 with addr=10.0.0.2, port=4420 
00:21:39.571 [2024-11-19 11:32:53.089444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17da610 is same with the state(6) to be set 
00:21:39.571 [2024-11-19 11:32:53.089653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:39.571 [2024-11-19 11:32:53.089668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c5d50 with addr=10.0.0.2, port=4420 
00:21:39.571 [2024-11-19 11:32:53.089677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c5d50 is same with the state(6) to be set 
00:21:39.571 [2024-11-19 11:32:53.089901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:39.571 [2024-11-19 11:32:53.089915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3cc40 with addr=10.0.0.2, port=4420 
00:21:39.571 [2024-11-19 11:32:53.089924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cc40 is same with the state(6) to be set 
00:21:39.571 [2024-11-19 11:32:53.090523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 
00:21:39.571 [2024-11-19 11:32:53.090539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 
00:21:39.571 [2024-11-19 11:32:53.090566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17da610 (9): Bad file descriptor 
00:21:39.571 [2024-11-19 11:32:53.090578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c5d50 (9): Bad file descriptor 
00:21:39.571 [2024-11-19 11:32:53.090589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3cc40 (9): Bad file descriptor 
00:21:39.571 [2024-11-19 11:32:53.090827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:39.571 [2024-11-19 11:32:53.090844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c61b0 with addr=10.0.0.2, port=4420 
00:21:39.571 [2024-11-19 11:32:53.090853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c61b0 is same with the state(6) to be set 
00:21:39.571 [2024-11-19 11:32:53.091054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:39.571 [2024-11-19 11:32:53.091069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf07a0 with addr=10.0.0.2, port=4420 
00:21:39.571 [2024-11-19 11:32:53.091078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf07a0 is same with the state(6) to be set 
00:21:39.571 [2024-11-19 11:32:53.091087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 
00:21:39.571 [2024-11-19 11:32:53.091095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 
00:21:39.571 [2024-11-19 11:32:53.091105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:21:39.571 [2024-11-19 11:32:53.091118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:21:39.571 [2024-11-19 11:32:53.091127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 
00:21:39.572 [2024-11-19 11:32:53.091135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 
00:21:39.572 [2024-11-19 11:32:53.091142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:21:39.572 [2024-11-19 11:32:53.091150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:21:39.572 [2024-11-19 11:32:53.091158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 
00:21:39.572 [2024-11-19 11:32:53.091166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 
00:21:39.572 [2024-11-19 11:32:53.091174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:21:39.572 [2024-11-19 11:32:53.091181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:39.572 [2024-11-19 11:32:53.091234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c61b0 (9): Bad file descriptor 
00:21:39.572 [2024-11-19 11:32:53.091247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf07a0 (9): Bad file descriptor 
00:21:39.572 [2024-11-19 11:32:53.091282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 
00:21:39.572 [2024-11-19 11:32:53.091290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 
00:21:39.572 [2024-11-19 11:32:53.091298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:21:39.572 [2024-11-19 11:32:53.091306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:39.572 [2024-11-19 11:32:53.091315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 
00:21:39.572 [2024-11-19 11:32:53.091322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 
00:21:39.572 [2024-11-19 11:32:53.091330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:21:39.572 [2024-11-19 11:32:53.091338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:39.572 [2024-11-19 11:32:53.093802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.093820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.093831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.093838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.093846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.093853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.093861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.093868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.093876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.093888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.093897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.093903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.093911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.093917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.093926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.093932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.093940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.093946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.093958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.093964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.093972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.093979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.093987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.093993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.572 [2024-11-19 11:32:53.094287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.572 [2024-11-19 11:32:53.094293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.094765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.094772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbb6e0 is same with the state(6) to be set 
00:21:39.573 [2024-11-19 11:32:53.095789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.573 [2024-11-19 11:32:53.095802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.573 [2024-11-19 11:32:53.095812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.573 [2024-11-19 
11:32:53.095820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.095828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.095837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.095847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.095853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.095862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.095869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.095877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.095884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.095892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.095898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.095907] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.095913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.095921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.095927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.095936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.095942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.095956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.095964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.095972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.095978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.095987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.095993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 
[2024-11-19 11:32:53.096083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.574 [2024-11-19 11:32:53.096408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.574 [2024-11-19 11:32:53.096416] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096495] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 
11:32:53.096665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.096744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.096751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc9ed0 is same with the state(6) to be set 00:21:39.575 [2024-11-19 11:32:53.097756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.575 [2024-11-19 11:32:53.097922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.097988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.097996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.575 [2024-11-19 11:32:53.098002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.575 [2024-11-19 11:32:53.098011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098270] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098351] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 
11:32:53.098520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.576 [2024-11-19 11:32:53.098594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.576 [2024-11-19 11:32:53.098600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.098609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.098615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.098623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.098629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.098638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.098644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.098652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.098658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.098667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.098673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.098681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.098687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.098695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.098702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.098710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.098718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.098725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccb450 is same with the state(6) to be set 00:21:39.577 [2024-11-19 11:32:53.099725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.577 [2024-11-19 11:32:53.099768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.099986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.099992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.100001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.100007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.100015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.100021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.100029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.100038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.100046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.100052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.100060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.100067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.100075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.100082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.100091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.100098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.100106] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.100113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.577 [2024-11-19 11:32:53.100121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.577 [2024-11-19 11:32:53.100128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100186] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 
11:32:53.100363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.578 [2024-11-19 11:32:53.100617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.100684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.100692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2c15410 is same with the state(6) to be set 00:21:39.578 [2024-11-19 11:32:53.101692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.101704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.578 [2024-11-19 11:32:53.101715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.578 [2024-11-19 11:32:53.101722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:39.579 [2024-11-19 11:32:53.101872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.101986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.101992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102209] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102289] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.579 [2024-11-19 11:32:53.102304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.579 [2024-11-19 11:32:53.102311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 
11:32:53.102457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.580 [2024-11-19 11:32:53.102611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.580 [2024-11-19 11:32:53.102620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.580 [2024-11-19 11:32:53.102628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.580 [2024-11-19 11:32:53.102637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.580 [2024-11-19 11:32:53.102643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.580 [2024-11-19 11:32:53.102651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fa60 is same with the state(6) to be set
00:21:39.580 [2024-11-19 11:32:53.103627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:39.580 [2024-11-19 11:32:53.103642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:39.580 [2024-11-19 11:32:53.103650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:39.580 [2024-11-19 11:32:53.103659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:39.580 task offset: 27136 on job bdev=Nvme1n1 fails
00:21:39.580
00:21:39.580 Latency(us)
00:21:39.580 [2024-11-19T10:32:53.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:39.580 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.580 Job: Nvme1n1 ended in about 0.75 seconds with error
00:21:39.580 Verification LBA range: start 0x0 length 0x400
00:21:39.580 Nvme1n1 : 0.75 256.30 16.02 85.43 0.00 184834.84 2735.42 219745.06
00:21:39.580 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.580 Job: Nvme2n1 ended in about 0.79 seconds with error
00:21:39.580 Verification LBA range: start 0x0 length 0x400
00:21:39.580 Nvme2n1 : 0.79 162.95 10.18 81.47 0.00 253298.72 23706.94 246187.41
00:21:39.580 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.580 Job: Nvme3n1 ended in about 0.80 seconds with error
00:21:39.580 Verification LBA range: start 0x0 length 0x400
00:21:39.580 Nvme3n1 : 0.80 161.00 10.06 80.50 0.00 251163.75 28835.84 216097.84
00:21:39.580 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.580 Job: Nvme4n1 ended in about 0.78 seconds with error
00:21:39.580 Verification LBA range: start 0x0 length 0x400
00:21:39.580 Nvme4n1 : 0.78 247.68 15.48 82.56 0.00 179353.15 15614.66 207891.59
00:21:39.580 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.580 Job: Nvme5n1 ended in about 0.80 seconds with error
00:21:39.580 Verification LBA range: start 0x0 length 0x400
00:21:39.580 Nvme5n1 : 0.80 160.60 10.04 80.30 0.00 241202.46 16184.54 218833.25
00:21:39.580 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.580 Job: Nvme6n1 ended in about 0.80 seconds with error
00:21:39.580 Verification LBA range: start 0x0 length 0x400
00:21:39.580 Nvme6n1 : 0.80 160.20 10.01 80.10 0.00 236636.01 18919.96 220656.86
00:21:39.580 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.580 Job: Nvme7n1 ended in about 0.75 seconds with error
00:21:39.580 Verification LBA range: start 0x0 length 0x400
00:21:39.580 Nvme7n1 : 0.75 254.46 15.90 84.82 0.00 162318.11 2179.78 215186.03
00:21:39.580 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.580 Job: Nvme8n1 ended in about 0.80 seconds with error
00:21:39.580 Verification LBA range: start 0x0 length 0x400
00:21:39.580 Nvme8n1 : 0.80 159.81 9.99 79.91 0.00 226616.84 14132.98 222480.47
00:21:39.580 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.580 Job: Nvme9n1 ended in about 0.80 seconds with error
00:21:39.580 Verification LBA range: start 0x0 length 0x400
00:21:39.580 Nvme9n1 : 0.80 159.42 9.96 79.71 0.00 222121.63 18350.08 222480.47
00:21:39.580 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:39.580 Job: Nvme10n1 ended in about 0.79 seconds with error
00:21:39.580 Verification LBA range: start 0x0 length 0x400
00:21:39.580 Nvme10n1 : 0.79 162.41 10.15 81.21 0.00 211933.87 18578.03 242540.19
00:21:39.580 [2024-11-19T10:32:53.361Z] ===================================================================================================================
00:21:39.580 [2024-11-19T10:32:53.361Z] Total : 1884.83 117.80 816.01 0.00 213180.13 2179.78 246187.41
00:21:39.580 [2024-11-19 11:32:53.133828] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:39.580 [2024-11-19 11:32:53.133878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:39.580 [2024-11-19 11:32:53.134241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.580 [2024-11-19 11:32:53.134261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c3c70 with addr=10.0.0.2, port=4420
00:21:39.581 [2024-11-19 11:32:53.134271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3c70 is same with the state(6) to be set
00:21:39.581 [2024-11-19 11:32:53.134413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.581 [2024-11-19 11:32:53.134424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf14c0 with addr=10.0.0.2, port=4420
00:21:39.581 [2024-11-19 11:32:53.134431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf14c0 is same with the state(6) to be set
00:21:39.581 [2024-11-19 11:32:53.134570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.581 [2024-11-19 11:32:53.134580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce7300 with addr=10.0.0.2, port=4420
00:21:39.581 [2024-11-19 11:32:53.134588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7300 is same with the state(6) to be set
00:21:39.581 [2024-11-19 11:32:53.134672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.581 [2024-11-19 11:32:53.134683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d03590 with addr=10.0.0.2, port=4420
00:21:39.581 [2024-11-19 11:32:53.134690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d03590 is same with the state(6) to be set
00:21:39.581 [2024-11-19 11:32:53.135870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:39.581 [2024-11-19 11:32:53.135888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:39.581 [2024-11-19 11:32:53.135897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:39.581 [2024-11-19 11:32:53.135905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:39.581 [2024-11-19 11:32:53.135914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:39.581 [2024-11-19 11:32:53.136216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.581 [2024-11-19 11:32:53.136231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d36140 with addr=10.0.0.2, port=4420
00:21:39.581 [2024-11-19 11:32:53.136244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d36140 is same with the state(6) to be set
00:21:39.581 [2024-11-19 11:32:53.136256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c3c70 (9): Bad file descriptor
00:21:39.581 [2024-11-19 11:32:53.136269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf14c0 (9): Bad file descriptor
00:21:39.581 [2024-11-19 11:32:53.136278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce7300 (9): Bad file descriptor
00:21:39.581 [2024-11-19 11:32:53.136287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d03590 (9): Bad file descriptor
00:21:39.581 [2024-11-19 11:32:53.136318] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:21:39.581 [2024-11-19 11:32:53.136329] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:21:39.581 [2024-11-19 11:32:53.136339] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:21:39.581 [2024-11-19 11:32:53.136349] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:21:39.581 [2024-11-19 11:32:53.136855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.581 [2024-11-19 11:32:53.136874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3cc40 with addr=10.0.0.2, port=4420
00:21:39.581 [2024-11-19 11:32:53.136882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cc40 is same with the state(6) to be set
00:21:39.581 [2024-11-19 11:32:53.137010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.581 [2024-11-19 11:32:53.137022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c5d50 with addr=10.0.0.2, port=4420
00:21:39.581 [2024-11-19 11:32:53.137029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c5d50 is same with the state(6) to be set
00:21:39.581 [2024-11-19 11:32:53.137167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.581 [2024-11-19 11:32:53.137177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17da610 with addr=10.0.0.2, port=4420
00:21:39.581 [2024-11-19 11:32:53.137185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17da610 is same with the state(6) to be set
00:21:39.581 [2024-11-19 11:32:53.137319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.581 [2024-11-19 11:32:53.137329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf07a0 with addr=10.0.0.2, port=4420
00:21:39.581 [2024-11-19 11:32:53.137337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf07a0 is same with the state(6) to be set
00:21:39.581 [2024-11-19 11:32:53.137421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:39.581 [2024-11-19 11:32:53.137432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock:
*ERROR*: sock connection error of tqpair=0x18c61b0 with addr=10.0.0.2, port=4420 00:21:39.581 [2024-11-19 11:32:53.137439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c61b0 is same with the state(6) to be set 00:21:39.581 [2024-11-19 11:32:53.137449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d36140 (9): Bad file descriptor 00:21:39.581 [2024-11-19 11:32:53.137459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:39.581 [2024-11-19 11:32:53.137466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:39.581 [2024-11-19 11:32:53.137474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:39.581 [2024-11-19 11:32:53.137487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:39.581 [2024-11-19 11:32:53.137496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:39.581 [2024-11-19 11:32:53.137502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:39.581 [2024-11-19 11:32:53.137508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:39.581 [2024-11-19 11:32:53.137514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:21:39.581 [2024-11-19 11:32:53.137521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:39.581 [2024-11-19 11:32:53.137527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:39.581 [2024-11-19 11:32:53.137533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:39.581 [2024-11-19 11:32:53.137539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:39.581 [2024-11-19 11:32:53.137546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:39.581 [2024-11-19 11:32:53.137551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:39.581 [2024-11-19 11:32:53.137557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:39.581 [2024-11-19 11:32:53.137563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:21:39.581 [2024-11-19 11:32:53.138231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3cc40 (9): Bad file descriptor 00:21:39.581 [2024-11-19 11:32:53.138250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c5d50 (9): Bad file descriptor 00:21:39.581 [2024-11-19 11:32:53.138259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17da610 (9): Bad file descriptor 00:21:39.581 [2024-11-19 11:32:53.138267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf07a0 (9): Bad file descriptor 00:21:39.581 [2024-11-19 11:32:53.138275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c61b0 (9): Bad file descriptor 00:21:39.581 [2024-11-19 11:32:53.138283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:39.581 [2024-11-19 11:32:53.138289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:39.581 [2024-11-19 11:32:53.138296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:39.581 [2024-11-19 11:32:53.138302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:39.581 [2024-11-19 11:32:53.138328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:39.581 [2024-11-19 11:32:53.138336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:39.581 [2024-11-19 11:32:53.138342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:21:39.581 [2024-11-19 11:32:53.138347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:39.581 [2024-11-19 11:32:53.138354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:39.581 [2024-11-19 11:32:53.138360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:39.581 [2024-11-19 11:32:53.138369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:39.581 [2024-11-19 11:32:53.138375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:39.581 [2024-11-19 11:32:53.138382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:39.581 [2024-11-19 11:32:53.138388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:39.581 [2024-11-19 11:32:53.138394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:39.581 [2024-11-19 11:32:53.138400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:39.581 [2024-11-19 11:32:53.138406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:39.581 [2024-11-19 11:32:53.138412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:39.581 [2024-11-19 11:32:53.138418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:21:39.581 [2024-11-19 11:32:53.138424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:39.581 [2024-11-19 11:32:53.138431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:39.581 [2024-11-19 11:32:53.138437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:39.581 [2024-11-19 11:32:53.138443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:39.581 [2024-11-19 11:32:53.138449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:39.842 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2321627 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2321627 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2321627 
00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:40.781 11:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:40.781 rmmod nvme_tcp 00:21:40.781 rmmod nvme_fabrics 00:21:40.781 rmmod nvme_keyring 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2321342 ']' 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2321342 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2321342 ']' 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2321342 00:21:40.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2321342) - No such process 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2321342 is not found' 00:21:40.781 Process with pid 2321342 is not found 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:40.781 11:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.781 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.319 00:21:43.319 real 0m7.721s 00:21:43.319 user 0m18.834s 00:21:43.319 sys 0m1.299s 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.319 ************************************ 00:21:43.319 END TEST nvmf_shutdown_tc3 00:21:43.319 ************************************ 00:21:43.319 11:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:43.319 ************************************ 00:21:43.319 START TEST nvmf_shutdown_tc4 00:21:43.319 ************************************ 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:43.319 11:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:43.319 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:43.319 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.319 11:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.319 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:43.319 Found net devices under 0000:86:00.0: cvl_0_0 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.320 11:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:43.320 Found net devices under 0000:86:00.1: cvl_0_1 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.320 
11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:43.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:21:43.320 00:21:43.320 --- 10.0.0.2 ping statistics --- 00:21:43.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.320 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:21:43.320 00:21:43.320 --- 10.0.0.1 ping statistics --- 00:21:43.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.320 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:43.320 11:32:56 
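The common.sh trace above builds the standard SPDK two-interface TCP topology: one port of the NIC pair (cvl_0_0) is moved into a private network namespace for the target, while the initiator port (cvl_0_1) stays in the root namespace, so both ends talk over real hardware on a single host. A minimal dry-run sketch of that sequence, reconstructed from the log (interface names and addresses taken from the trace; the `run` helper only echoes the commands, since the real ones need root and the actual cvl_0_* devices):

```shell
#!/bin/sh
# Sketch of the netns topology set up by nvmf_tcp_init in the log above.
# Target side (cvl_0_0, 10.0.0.2) lives inside the namespace; initiator
# side (cvl_0_1, 10.0.0.1) stays in the root namespace.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0
INI_IF=cvl_0_1

run() { echo "+ $*"; }   # echo-only; replace with "$@" (as root) to apply

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port toward the initiator interface, then verify
# reachability in both directions, exactly as the log does.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Any process later prefixed with `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD array) runs against the namespaced target interface.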
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2322787 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2322787 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2322787 ']' 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.320 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:43.320 [2024-11-19 11:32:57.048711] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:21:43.320 [2024-11-19 11:32:57.048755] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.580 [2024-11-19 11:32:57.126295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:43.580 [2024-11-19 11:32:57.168488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.580 [2024-11-19 11:32:57.168526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.580 [2024-11-19 11:32:57.168533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.580 [2024-11-19 11:32:57.168539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.580 [2024-11-19 11:32:57.168544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:43.580 [2024-11-19 11:32:57.170243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.580 [2024-11-19 11:32:57.170339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.580 [2024-11-19 11:32:57.170429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:43.580 [2024-11-19 11:32:57.170430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:43.580 [2024-11-19 11:32:57.306212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.580 11:32:57 
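nvmf_tgt is launched with `-m 0x1E`, and the log then shows exactly four reactors starting on cores 1 through 4. That is the SPDK core-mask convention: bit i set selects core i, so 0x1E = 0b11110 excludes core 0 and takes cores 1-4. A small decoder illustrating the arithmetic (helper name is ours, not from the test suite):

```shell
# Decode an SPDK -m core mask into the core list it selects: bit i => core i.
mask_to_cores() {
    mask=$(( $1 )); i=0; cores=""
    while [ "$mask" -ne 0 ]; do
        [ $(( mask & 1 )) -ne 0 ] && cores="$cores $i"
        mask=$(( mask >> 1 )); i=$(( i + 1 ))
    done
    echo "${cores# }"
}

mask_to_cores 0x1E    # prints "1 2 3 4"
```

This matches the four "Reactor started on core N" notices above (the out-of-order core numbers there are just concurrent reactor threads logging).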
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.580 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:43.839 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.839 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:43.839 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:43.839 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.839 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:43.839 Malloc1 00:21:43.839 [2024-11-19 11:32:57.413577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.839 Malloc2 00:21:43.839 Malloc3 00:21:43.839 Malloc4 00:21:43.839 Malloc5 00:21:43.839 Malloc6 00:21:44.098 Malloc7 00:21:44.098 Malloc8 00:21:44.098 Malloc9 
00:21:44.098 Malloc10 00:21:44.098 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.098 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:44.098 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.098 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:44.098 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2322939 00:21:44.098 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:44.098 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:44.356 [2024-11-19 11:32:57.911467] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
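The repeated `for i in "${num_subsystems[@]}"` / `cat` trace lines earlier build up rpcs.txt, a batch of JSON-RPC calls that creates the ten Malloc-backed subsystems (Malloc1..Malloc10) the target then reports listening on 10.0.0.2:4420. A hypothetical reconstruction of what each loop iteration appends — the RPC method names (bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are standard SPDK ones, but the bdev size and serial-number arguments here are illustrative, not taken from the log:

```shell
# Sketch: generate a batch RPC file creating ten Malloc-backed NVMe-oF
# subsystems on the TCP listener shown in the log. Sizes/serials are
# illustrative assumptions.
rpcs=rpcs.txt
: > "$rpcs"
for i in 1 2 3 4 5 6 7 8 9 10; do
    cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
wc -l < "$rpcs"    # 40 lines: four RPCs per subsystem
```

The batch would then be fed to rpc.py in one shot, which is why the log shows a single `rpc_cmd` call at shutdown.sh@36 after all the `cat`s.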
00:21:49.668 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:49.668 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2322787 00:21:49.668 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2322787 ']' 00:21:49.668 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2322787 00:21:49.668 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:49.668 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.668 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2322787 00:21:49.668 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:49.668 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:49.668 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2322787' 00:21:49.668 killing process with pid 2322787 00:21:49.668 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2322787 00:21:49.668 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2322787 00:21:49.668 [2024-11-19 11:33:02.909788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd2e0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 
11:33:02.910879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdca0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 11:33:02.910906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdca0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 11:33:02.910914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdca0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 11:33:02.911146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x754dc0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 11:33:02.911173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x754dc0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 11:33:02.911188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x754dc0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 11:33:02.911195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x754dc0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 11:33:02.911201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x754dc0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 11:33:02.911207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x754dc0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 11:33:02.911214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x754dc0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 11:33:02.911221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x754dc0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 11:33:02.911227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x754dc0 is same with the state(6) to be set 00:21:49.668 [2024-11-19 11:33:02.911233] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x754dc0 is same with the state(6) to be set 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 
00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 [2024-11-19 11:33:02.921397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865b10 is same with the state(6) to be set 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 [2024-11-19 11:33:02.921424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865b10 is same with the state(6) to be set 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 [2024-11-19 11:33:02.921569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error 
(sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 [2024-11-19 11:33:02.922410] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 starting I/O failed: -6 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.668 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with 
error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 
starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 [2024-11-19 11:33:02.923581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 [2024-11-19 11:33:02.923852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1250 is same with the state(6) to be set 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 starting I/O failed: -6 00:21:49.669 [2024-11-19 11:33:02.923878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1250 is same with the state(6) to be set 00:21:49.669 Write completed with error (sct=0, sc=8) 00:21:49.669 [2024-11-19 11:33:02.923887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1250 is same with the state(6) to be set 00:21:49.669 starting I/O failed: -6 00:21:49.669 [2024-11-19 11:33:02.923894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1250 is same
with the state(6) to be set
00:21:49.669 [2024-11-19 11:33:02.923901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1250 is same with the state(6) to be set
00:21:49.669 Write completed with error (sct=0, sc=8)
00:21:49.669 [2024-11-19 11:33:02.923907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1250 is same with the state(6) to be set
00:21:49.669 starting I/O failed: -6
00:21:49.669 [2024-11-19 11:33:02.923914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1250 is same with the state(6) to be set
00:21:49.669 [2024-11-19 11:33:02.923921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1250 is same with the state(6) to be set
00:21:49.669 Write completed with error (sct=0, sc=8)
00:21:49.669 [2024-11-19 11:33:02.923927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1250 is same with the state(6) to be set
00:21:49.669 starting I/O failed: -6
00:21:49.669 Write completed with error (sct=0, sc=8)
00:21:49.669 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.669 [2024-11-19 11:33:02.925218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.669 NVMe io qpair process completion error
00:21:49.669 Write completed with error (sct=0, sc=8)
00:21:49.670 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.670 [2024-11-19 11:33:02.926200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:49.670 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.670 [2024-11-19 11:33:02.926445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863f60 is same with the state(6) to be set
00:21:49.670 [2024-11-19 11:33:02.926459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863f60 is same with the state(6) to be set
00:21:49.670 Write completed with error (sct=0, sc=8)
00:21:49.670 [2024-11-19 11:33:02.926466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863f60 is same with the state(6) to be set
00:21:49.670 [2024-11-19 11:33:02.926473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863f60 is same with the state(6) to be set
00:21:49.670 Write completed with error (sct=0, sc=8)
00:21:49.670 starting I/O failed: -6
00:21:49.670 [2024-11-19 11:33:02.926480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863f60 is same with the state(6) to be set
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.670 [2024-11-19 11:33:02.927015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1c10 is same with the state(6) to be set
00:21:49.670 Write completed with error (sct=0, sc=8)
00:21:49.670 [2024-11-19 11:33:02.927037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1c10 is same with the state(6) to be set
00:21:49.670 [2024-11-19 11:33:02.927045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1c10 is same with the state(6) to be set
00:21:49.670 Write completed with error (sct=0, sc=8)
00:21:49.670 [2024-11-19 11:33:02.927052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1c10 is same with the state(6) to be set
00:21:49.670 starting I/O failed: -6
00:21:49.670 [2024-11-19 11:33:02.927058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1c10 is same with the state(6) to be set
00:21:49.670 [2024-11-19 11:33:02.927065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1c10 is same with the state(6) to be set
00:21:49.670 Write completed with error (sct=0, sc=8)
00:21:49.670 [2024-11-19 11:33:02.927072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1c10 is same with the state(6) to be set
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.670 [2024-11-19 11:33:02.927147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.670 [2024-11-19 11:33:02.927362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864ca0 is same with the state(6) to be set
00:21:49.670 Write completed with error (sct=0, sc=8)
00:21:49.670 starting I/O failed: -6
00:21:49.670 [2024-11-19 11:33:02.927384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864ca0 is same with the state(6) to be set
00:21:49.670 [2024-11-19 11:33:02.927391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864ca0 is same with the state(6) to be set
00:21:49.670 Write completed with error (sct=0, sc=8)
00:21:49.670 [2024-11-19 11:33:02.927398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864ca0 is same with the state(6) to be set
00:21:49.670 [2024-11-19 11:33:02.927404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864ca0 is same with the state(6) to be set
00:21:49.670 Write completed with error (sct=0, sc=8)
00:21:49.670 [2024-11-19 11:33:02.927410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864ca0 is same with the state(6) to be set
00:21:49.670 starting I/O failed: -6
00:21:49.670 [2024-11-19 11:33:02.927417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864ca0 is same with the state(6) to be set
00:21:49.670 [2024-11-19 11:33:02.927424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864ca0 is same with the state(6) to be set
00:21:49.670 [2024-11-19 11:33:02.927430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864ca0 is same with the state(6) to be set
00:21:49.670 Write completed with error (sct=0, sc=8)
00:21:49.670 starting I/O failed: -6
00:21:49.670 [2024-11-19 11:33:02.927438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x864ca0 is same with the state(6) to be set
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.671 [2024-11-19 11:33:02.928149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.671 [2024-11-19 11:33:02.929806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:49.671 NVMe io qpair process completion error
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.671 [2024-11-19 11:33:02.930708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.672 [2024-11-19 11:33:02.931579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.672 [2024-11-19 11:33:02.932653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.673 [2024-11-19 11:33:02.934616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:49.673 NVMe io qpair process completion error
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeat ...]
00:21:49.673 Write completed with error
(sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 [2024-11-19 11:33:02.935764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed 
with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: 
-6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 [2024-11-19 11:33:02.936649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 Write completed with error (sct=0, sc=8) 00:21:49.673 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: 
-6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with 
error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 [2024-11-19 11:33:02.937678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:49.674 starting I/O failed: -6 00:21:49.674 starting I/O failed: -6 00:21:49.674 starting I/O failed: -6 00:21:49.674 starting I/O failed: -6 00:21:49.674 starting I/O failed: -6 00:21:49.674 starting I/O failed: -6 00:21:49.674 starting I/O failed: -6 00:21:49.674 starting I/O failed: -6 00:21:49.674 starting I/O failed: -6 00:21:49.674 starting I/O failed: -6 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write 
completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 
Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 
00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 [2024-11-19 11:33:02.939640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:49.674 NVMe io qpair process completion error 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 starting I/O failed: -6 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.674 Write completed with error 
(sct=0, sc=8) 00:21:49.674 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error 
(sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 
00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with 
error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 
starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 00:21:49.675 starting I/O failed: -6 00:21:49.675 Write completed with error (sct=0, sc=8) 
00:21:49.675 starting I/O failed: -6
00:21:49.675 Write completed with error (sct=0, sc=8)
[... the two messages above repeat for each outstanding write on the qpair; duplicates elided ...]
00:21:49.676 [2024-11-19 11:33:02.947218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:49.676 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages elided ...]
00:21:49.676 [2024-11-19 11:33:02.948229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure messages elided ...]
00:21:49.676 [2024-11-19 11:33:02.949118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-failure messages elided ...]
00:21:49.677 [2024-11-19 11:33:02.950144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure messages elided ...]
00:21:49.677 [2024-11-19 11:33:02.952195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.677 NVMe io qpair process completion error
[... repeated write-failure messages elided ...]
00:21:49.678 [2024-11-19 11:33:02.953198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure messages elided ...]
00:21:49.678 [2024-11-19 11:33:02.954113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-failure messages elided ...]
00:21:49.678 [2024-11-19 11:33:02.955119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure messages elided ...]
00:21:49.679 [2024-11-19 11:33:02.956997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.679 NVMe io qpair process completion error
[... repeated write-failure messages elided ...]
00:21:49.679 [2024-11-19 11:33:02.958054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure messages elided ...]
00:21:49.679 [2024-11-19 11:33:02.958934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... write-failure messages continue ...]
00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 
00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 [2024-11-19 11:33:02.959990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 
starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 
00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, 
sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 [2024-11-19 11:33:02.962405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.680 NVMe io qpair process completion error 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, 
sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 starting I/O failed: -6 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.680 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 [2024-11-19 11:33:02.963375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:49.681 Write 
completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O 
failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 [2024-11-19 11:33:02.964254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write 
completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 
00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 [2024-11-19 11:33:02.965284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: 
-6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.681 starting I/O failed: -6 00:21:49.681 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O 
failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting 
I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 [2024-11-19 11:33:02.969391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.682 NVMe io qpair process completion error 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 
00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 [2024-11-19 11:33:02.970404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] 
CQ transport error -6 (No such device or address) on qpair id 2 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, 
sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 Write completed with error (sct=0, sc=8) 00:21:49.682 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 [2024-11-19 11:33:02.971202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O 
failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write 
completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 [2024-11-19 11:33:02.972231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 
00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, 
sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error (sct=0, sc=8) 00:21:49.683 starting I/O failed: -6 00:21:49.683 Write completed with error 
(sct=0, sc=8)
00:21:49.683 starting I/O failed: -6
00:21:49.683 Write completed with error (sct=0, sc=8)
00:21:49.683 starting I/O failed: -6
00:21:49.683 Write completed with error (sct=0, sc=8)
00:21:49.683 starting I/O failed: -6
00:21:49.683 Write completed with error (sct=0, sc=8)
00:21:49.683 starting I/O failed: -6
00:21:49.683 Write completed with error (sct=0, sc=8)
00:21:49.683 starting I/O failed: -6
00:21:49.683 Write completed with error (sct=0, sc=8)
00:21:49.683 starting I/O failed: -6
00:21:49.683 Write completed with error (sct=0, sc=8)
00:21:49.683 starting I/O failed: -6
00:21:49.683 Write completed with error (sct=0, sc=8)
00:21:49.683 starting I/O failed: -6
00:21:49.683 Write completed with error (sct=0, sc=8)
00:21:49.683 starting I/O failed: -6
00:21:49.683 Write completed with error (sct=0, sc=8)
00:21:49.683 starting I/O failed: -6
00:21:49.683 [2024-11-19 11:33:02.974639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.684 NVMe io qpair process completion error
00:21:49.684 Initializing NVMe Controllers
00:21:49.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:49.684 Controller IO queue size 128, less than required.
00:21:49.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:49.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:49.684 Controller IO queue size 128, less than required.
00:21:49.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:49.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:49.684 Controller IO queue size 128, less than required.
00:21:49.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:49.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:49.684 Controller IO queue size 128, less than required.
00:21:49.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:49.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:49.684 Controller IO queue size 128, less than required.
00:21:49.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:49.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:49.684 Controller IO queue size 128, less than required.
00:21:49.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:49.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:49.684 Controller IO queue size 128, less than required.
00:21:49.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:49.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:49.684 Controller IO queue size 128, less than required.
00:21:49.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:49.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:49.684 Controller IO queue size 128, less than required.
00:21:49.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:49.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:49.684 Controller IO queue size 128, less than required.
00:21:49.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:49.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:49.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:49.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:49.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:49.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:49.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:49.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:49.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:49.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:49.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:49.684 Initialization complete. Launching workers.
00:21:49.684 ========================================================
00:21:49.684                                                                 Latency(us)
00:21:49.684 Device Information                                              :       IOPS      MiB/s    Average        min        max
00:21:49.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    2210.03      94.96   57925.96     900.72  101579.84
00:21:49.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2149.42      92.36   59569.09     722.64  117453.13
00:21:49.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    2160.14      92.82   59327.44     924.17  115466.56
00:21:49.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    2137.82      91.86   59963.61     834.43  111663.28
00:21:49.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    2157.30      92.70   59436.93     883.71  113750.53
00:21:49.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    2137.82      91.86   60003.89     825.98  111084.02
00:21:49.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    2179.62      93.66   58891.19     838.83  110458.14
00:21:49.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    2190.56      94.13   57928.72     684.91  109691.42
00:21:49.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   2131.03      91.57   59557.53     696.97  108756.71
00:21:49.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    2122.72      91.21   59800.51     883.35  108794.68
00:21:49.684 ========================================================
00:21:49.684 Total                                                           :   21576.45     927.11   59232.16     684.91  117453.13
00:21:49.684
00:21:49.684 [2024-11-19 11:33:02.977616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936ef0 is same with the state(6) to be set
00:21:49.684 [2024-11-19 11:33:02.977666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1938720 is same with the state(6) to be set
00:21:49.684 [2024-11-19 11:33:02.977697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1937a70 is same with the state(6) to be set
00:21:49.684 [2024-11-19 11:33:02.977727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936890 is same with the state(6) to be set
00:21:49.684 [2024-11-19 11:33:02.977755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936bc0 is same with the state(6) to be set
00:21:49.684 [2024-11-19 11:33:02.977783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1937410 is same with the state(6) to be set
00:21:49.684 [2024-11-19 11:33:02.977811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1937740 is same with the state(6) to be set
00:21:49.684 [2024-11-19 11:33:02.977840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936560 is same with the state(6) to be set
00:21:49.684 [2024-11-19 11:33:02.977868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1938900 is same with the state(6) to be set
00:21:49.684 [2024-11-19 11:33:02.977898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1938ae0 is same with the state(6) to be set
00:21:49.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:49.684 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2322939
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2322939
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2322939
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:50.623 rmmod nvme_tcp
00:21:50.623 rmmod nvme_fabrics
00:21:50.623 rmmod nvme_keyring
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2322787 ']'
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2322787
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2322787 ']'
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2322787
00:21:50.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2322787) - No such process
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2322787 is not found'
00:21:50.623 Process with pid 2322787 is not found
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:50.623 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:53.161
00:21:53.161 real 0m9.759s
00:21:53.161 user 0m24.950s
00:21:53.161 sys 0m5.095s
00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:53.161 11:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:53.161 ************************************ 00:21:53.161 END TEST nvmf_shutdown_tc4 00:21:53.161 ************************************ 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:53.161 00:21:53.161 real 0m41.102s 00:21:53.161 user 1m41.416s 00:21:53.161 sys 0m13.911s 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:53.161 ************************************ 00:21:53.161 END TEST nvmf_shutdown 00:21:53.161 ************************************ 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:53.161 ************************************ 00:21:53.161 START TEST nvmf_nsid 00:21:53.161 ************************************ 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:53.161 * Looking for test storage... 
00:21:53.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.161 
11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:53.161 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:53.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.162 --rc genhtml_branch_coverage=1 00:21:53.162 --rc genhtml_function_coverage=1 00:21:53.162 --rc genhtml_legend=1 00:21:53.162 --rc geninfo_all_blocks=1 00:21:53.162 --rc 
geninfo_unexecuted_blocks=1 00:21:53.162 00:21:53.162 ' 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:53.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.162 --rc genhtml_branch_coverage=1 00:21:53.162 --rc genhtml_function_coverage=1 00:21:53.162 --rc genhtml_legend=1 00:21:53.162 --rc geninfo_all_blocks=1 00:21:53.162 --rc geninfo_unexecuted_blocks=1 00:21:53.162 00:21:53.162 ' 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:53.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.162 --rc genhtml_branch_coverage=1 00:21:53.162 --rc genhtml_function_coverage=1 00:21:53.162 --rc genhtml_legend=1 00:21:53.162 --rc geninfo_all_blocks=1 00:21:53.162 --rc geninfo_unexecuted_blocks=1 00:21:53.162 00:21:53.162 ' 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:53.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.162 --rc genhtml_branch_coverage=1 00:21:53.162 --rc genhtml_function_coverage=1 00:21:53.162 --rc genhtml_legend=1 00:21:53.162 --rc geninfo_all_blocks=1 00:21:53.162 --rc geninfo_unexecuted_blocks=1 00:21:53.162 00:21:53.162 ' 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.162 11:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:53.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.162 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.163 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.163 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:53.163 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.163 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:53.163 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:53.163 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:53.163 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:59.736 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:59.736 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:59.736 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:59.737 Found net devices under 0000:86:00.0: cvl_0_0 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:59.737 Found net devices under 0000:86:00.1: cvl_0_1 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:59.737 11:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:59.737 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:59.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:21:59.737 00:21:59.737 --- 10.0.0.2 ping statistics --- 00:21:59.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.737 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:21:59.737 00:21:59.737 --- 10.0.0.1 ping statistics --- 00:21:59.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.737 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.737 11:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2327922 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2327922 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2327922 ']' 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:59.737 [2024-11-19 11:33:12.741811] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:21:59.737 [2024-11-19 11:33:12.741857] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.737 [2024-11-19 11:33:12.819139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.737 [2024-11-19 11:33:12.860330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.737 [2024-11-19 11:33:12.860368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.737 [2024-11-19 11:33:12.860375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.737 [2024-11-19 11:33:12.860381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.737 [2024-11-19 11:33:12.860386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:59.737 [2024-11-19 11:33:12.860946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2328112 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.737 
11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.737 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:59.738 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:59.738 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:59.738 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:59.738 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=5cb59437-154f-4ff7-b4d1-d4b8b82babed 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=fff2614f-ab84-4f3e-9cb2-18fc1904fb14 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=2bec8ed6-fd63-4d49-838d-e711d855eea9 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:59.738 null0 00:21:59.738 null1 00:21:59.738 [2024-11-19 11:33:13.043393] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:21:59.738 [2024-11-19 11:33:13.043440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2328112 ] 00:21:59.738 null2 00:21:59.738 [2024-11-19 11:33:13.048739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.738 [2024-11-19 11:33:13.072936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2328112 /var/tmp/tgt2.sock 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2328112 ']' 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:59.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:59.738 [2024-11-19 11:33:13.120267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.738 [2024-11-19 11:33:13.161640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:59.738 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:59.996 [2024-11-19 11:33:13.685871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.996 [2024-11-19 11:33:13.701987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:59.996 nvme0n1 nvme0n2 00:21:59.996 nvme1n1 00:21:59.996 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:59.996 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:59.996 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:01.375 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 5cb59437-154f-4ff7-b4d1-d4b8b82babed 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:02.313 11:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5cb59437154f4ff7b4d1d4b8b82babed 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5CB59437154F4FF7B4D1D4B8B82BABED 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 5CB59437154F4FF7B4D1D4B8B82BABED == \5\C\B\5\9\4\3\7\1\5\4\F\4\F\F\7\B\4\D\1\D\4\B\8\B\8\2\B\A\B\E\D ]] 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid fff2614f-ab84-4f3e-9cb2-18fc1904fb14 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:02.313 
11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fff2614fab844f3e9cb218fc1904fb14 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FFF2614FAB844F3E9CB218FC1904FB14 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ FFF2614FAB844F3E9CB218FC1904FB14 == \F\F\F\2\6\1\4\F\A\B\8\4\4\F\3\E\9\C\B\2\1\8\F\C\1\9\0\4\F\B\1\4 ]] 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:02.313 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:02.313 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 2bec8ed6-fd63-4d49-838d-e711d855eea9 00:22:02.313 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:02.313 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:02.313 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:02.313 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:22:02.313 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:02.313 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2bec8ed6fd634d49838de711d855eea9 00:22:02.313 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2BEC8ED6FD634D49838DE711D855EEA9 00:22:02.313 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 2BEC8ED6FD634D49838DE711D855EEA9 == \2\B\E\C\8\E\D\6\F\D\6\3\4\D\4\9\8\3\8\D\E\7\1\1\D\8\5\5\E\E\A\9 ]] 00:22:02.313 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:02.573 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:02.573 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:02.573 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2328112 00:22:02.573 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2328112 ']' 00:22:02.573 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2328112 00:22:02.573 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:02.573 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.573 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2328112 00:22:02.573 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:02.573 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:02.573 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2328112' 00:22:02.573 killing process with pid 2328112 00:22:02.573 11:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2328112 00:22:02.573 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2328112 00:22:02.832 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:02.832 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:02.832 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:02.832 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:02.832 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:02.832 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:02.832 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:02.832 rmmod nvme_tcp 00:22:03.091 rmmod nvme_fabrics 00:22:03.091 rmmod nvme_keyring 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2327922 ']' 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2327922 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2327922 ']' 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2327922 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.091 11:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2327922 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2327922' 00:22:03.091 killing process with pid 2327922 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2327922 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2327922 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:03.091 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:03.350 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:03.350 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:03.350 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.350 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.350 11:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.334 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:05.334 00:22:05.334 real 0m12.365s 00:22:05.334 user 0m9.689s 00:22:05.334 sys 0m5.454s 00:22:05.334 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.334 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.334 ************************************ 00:22:05.334 END TEST nvmf_nsid 00:22:05.334 ************************************ 00:22:05.334 11:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:05.334 00:22:05.334 real 12m1.419s 00:22:05.334 user 25m47.305s 00:22:05.334 sys 3m43.783s 00:22:05.334 11:33:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.334 11:33:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:05.334 ************************************ 00:22:05.334 END TEST nvmf_target_extra 00:22:05.334 ************************************ 00:22:05.334 11:33:19 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:05.334 11:33:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:05.334 11:33:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:05.334 11:33:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:05.334 ************************************ 00:22:05.334 START TEST nvmf_host 00:22:05.335 ************************************ 00:22:05.335 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:05.601 * Looking for test storage... 
00:22:05.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:05.601 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:05.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.602 --rc genhtml_branch_coverage=1 00:22:05.602 --rc genhtml_function_coverage=1 00:22:05.602 --rc genhtml_legend=1 00:22:05.602 --rc geninfo_all_blocks=1 00:22:05.602 --rc geninfo_unexecuted_blocks=1 00:22:05.602 00:22:05.602 ' 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:05.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.602 --rc genhtml_branch_coverage=1 00:22:05.602 --rc genhtml_function_coverage=1 00:22:05.602 --rc genhtml_legend=1 00:22:05.602 --rc 
geninfo_all_blocks=1 00:22:05.602 --rc geninfo_unexecuted_blocks=1 00:22:05.602 00:22:05.602 ' 00:22:05.602 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:05.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.602 --rc genhtml_branch_coverage=1 00:22:05.602 --rc genhtml_function_coverage=1 00:22:05.603 --rc genhtml_legend=1 00:22:05.603 --rc geninfo_all_blocks=1 00:22:05.603 --rc geninfo_unexecuted_blocks=1 00:22:05.603 00:22:05.603 ' 00:22:05.603 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:05.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.603 --rc genhtml_branch_coverage=1 00:22:05.603 --rc genhtml_function_coverage=1 00:22:05.603 --rc genhtml_legend=1 00:22:05.603 --rc geninfo_all_blocks=1 00:22:05.603 --rc geninfo_unexecuted_blocks=1 00:22:05.603 00:22:05.603 ' 00:22:05.603 11:33:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.603 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:05.603 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.603 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.603 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.603 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.603 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.603 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.603 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.603 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.603 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.604 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:22:05.604 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.604 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.604 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.604 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.604 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.604 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.604 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.604 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:05.604 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.604 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.604 11:33:19 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.605 11:33:19 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:05.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:05.606 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:05.607 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:05.607 11:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.607 ************************************ 00:22:05.607 START TEST nvmf_multicontroller 00:22:05.607 ************************************ 00:22:05.607 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:05.870 * Looking for test storage... 
00:22:05.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:05.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.870 --rc genhtml_branch_coverage=1 00:22:05.870 --rc genhtml_function_coverage=1 
00:22:05.870 --rc genhtml_legend=1 00:22:05.870 --rc geninfo_all_blocks=1 00:22:05.870 --rc geninfo_unexecuted_blocks=1 00:22:05.870 00:22:05.870 ' 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:05.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.870 --rc genhtml_branch_coverage=1 00:22:05.870 --rc genhtml_function_coverage=1 00:22:05.870 --rc genhtml_legend=1 00:22:05.870 --rc geninfo_all_blocks=1 00:22:05.870 --rc geninfo_unexecuted_blocks=1 00:22:05.870 00:22:05.870 ' 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:05.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.870 --rc genhtml_branch_coverage=1 00:22:05.870 --rc genhtml_function_coverage=1 00:22:05.870 --rc genhtml_legend=1 00:22:05.870 --rc geninfo_all_blocks=1 00:22:05.870 --rc geninfo_unexecuted_blocks=1 00:22:05.870 00:22:05.870 ' 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:05.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.870 --rc genhtml_branch_coverage=1 00:22:05.870 --rc genhtml_function_coverage=1 00:22:05.870 --rc genhtml_legend=1 00:22:05.870 --rc geninfo_all_blocks=1 00:22:05.870 --rc geninfo_unexecuted_blocks=1 00:22:05.870 00:22:05.870 ' 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.870 11:33:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:05.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:05.870 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:05.871 11:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:12.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:12.444 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.444 11:33:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.444 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:12.444 Found net devices under 0000:86:00.0: cvl_0_0 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:12.445 Found net devices under 0000:86:00.1: cvl_0_1 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:12.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:22:12.445 00:22:12.445 --- 10.0.0.2 ping statistics --- 00:22:12.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.445 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:12.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:12.445 00:22:12.445 --- 10.0.0.1 ping statistics --- 00:22:12.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.445 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2332253 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2332253 00:22:12.445 11:33:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2332253 ']' 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.445 [2024-11-19 11:33:25.495698] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:22:12.445 [2024-11-19 11:33:25.495746] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.445 [2024-11-19 11:33:25.575544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:12.445 [2024-11-19 11:33:25.618736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.445 [2024-11-19 11:33:25.618769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:12.445 [2024-11-19 11:33:25.618776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.445 [2024-11-19 11:33:25.618782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.445 [2024-11-19 11:33:25.618787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.445 [2024-11-19 11:33:25.620196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.445 [2024-11-19 11:33:25.620311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.445 [2024-11-19 11:33:25.620312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.445 [2024-11-19 11:33:25.767891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.445 Malloc0 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.445 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.446 [2024-11-19 
11:33:25.833996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.446 [2024-11-19 11:33:25.841917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.446 Malloc1 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2332435 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2332435 /var/tmp/bdevperf.sock 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2332435 ']' 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.446 11:33:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.446 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.446 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:12.446 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:12.446 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.446 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.705 NVMe0n1 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.706 1 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:12.706 11:33:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.706 request: 00:22:12.706 { 00:22:12.706 "name": "NVMe0", 00:22:12.706 "trtype": "tcp", 00:22:12.706 "traddr": "10.0.0.2", 00:22:12.706 "adrfam": "ipv4", 00:22:12.706 "trsvcid": "4420", 00:22:12.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.706 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:12.706 "hostaddr": "10.0.0.1", 00:22:12.706 "prchk_reftag": false, 00:22:12.706 "prchk_guard": false, 00:22:12.706 "hdgst": false, 00:22:12.706 "ddgst": false, 00:22:12.706 "allow_unrecognized_csi": false, 00:22:12.706 "method": "bdev_nvme_attach_controller", 00:22:12.706 "req_id": 1 00:22:12.706 } 00:22:12.706 Got JSON-RPC error response 00:22:12.706 response: 00:22:12.706 { 00:22:12.706 "code": -114, 00:22:12.706 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:12.706 } 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:12.706 11:33:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.706 request: 00:22:12.706 { 00:22:12.706 "name": "NVMe0", 00:22:12.706 "trtype": "tcp", 00:22:12.706 "traddr": "10.0.0.2", 00:22:12.706 "adrfam": "ipv4", 00:22:12.706 "trsvcid": "4420", 00:22:12.706 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:12.706 "hostaddr": "10.0.0.1", 00:22:12.706 "prchk_reftag": false, 00:22:12.706 "prchk_guard": false, 00:22:12.706 "hdgst": false, 00:22:12.706 "ddgst": false, 00:22:12.706 "allow_unrecognized_csi": false, 00:22:12.706 "method": "bdev_nvme_attach_controller", 00:22:12.706 "req_id": 1 00:22:12.706 } 00:22:12.706 Got JSON-RPC error response 00:22:12.706 response: 00:22:12.706 { 00:22:12.706 "code": -114, 00:22:12.706 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:12.706 } 00:22:12.706 11:33:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.706 request: 00:22:12.706 { 00:22:12.706 "name": "NVMe0", 00:22:12.706 "trtype": "tcp", 00:22:12.706 "traddr": "10.0.0.2", 00:22:12.706 "adrfam": "ipv4", 00:22:12.706 "trsvcid": "4420", 00:22:12.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.706 "hostaddr": "10.0.0.1", 00:22:12.706 "prchk_reftag": false, 00:22:12.706 "prchk_guard": false, 00:22:12.706 "hdgst": false, 00:22:12.706 "ddgst": false, 00:22:12.706 "multipath": "disable", 00:22:12.706 "allow_unrecognized_csi": false, 00:22:12.706 "method": "bdev_nvme_attach_controller", 00:22:12.706 "req_id": 1 00:22:12.706 } 00:22:12.706 Got JSON-RPC error response 00:22:12.706 response: 00:22:12.706 { 00:22:12.706 "code": -114, 00:22:12.706 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:12.706 } 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.706 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.706 request: 00:22:12.706 { 00:22:12.706 "name": "NVMe0", 00:22:12.706 "trtype": "tcp", 00:22:12.706 "traddr": "10.0.0.2", 00:22:12.706 "adrfam": "ipv4", 00:22:12.706 "trsvcid": "4420", 00:22:12.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.706 "hostaddr": "10.0.0.1", 00:22:12.706 "prchk_reftag": false, 00:22:12.706 "prchk_guard": false, 00:22:12.706 "hdgst": false, 00:22:12.706 "ddgst": false, 00:22:12.707 "multipath": "failover", 00:22:12.707 "allow_unrecognized_csi": false, 00:22:12.707 "method": "bdev_nvme_attach_controller", 00:22:12.707 "req_id": 1 00:22:12.707 } 00:22:12.707 Got JSON-RPC error response 00:22:12.707 response: 00:22:12.707 { 00:22:12.707 "code": -114, 00:22:12.966 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:12.966 } 00:22:12.966 11:33:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.966 NVMe0n1 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.966 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.226 00:22:13.226 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.226 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:13.226 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:13.226 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.226 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.226 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.226 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:13.226 11:33:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:14.164 { 00:22:14.164 "results": [ 00:22:14.164 { 00:22:14.164 "job": "NVMe0n1", 00:22:14.164 "core_mask": "0x1", 00:22:14.164 "workload": "write", 00:22:14.164 "status": "finished", 00:22:14.164 "queue_depth": 128, 00:22:14.164 "io_size": 4096, 00:22:14.164 "runtime": 1.005882, 00:22:14.164 "iops": 24495.91502780644, 00:22:14.164 "mibps": 95.68716807736891, 00:22:14.164 "io_failed": 0, 00:22:14.164 "io_timeout": 0, 00:22:14.164 "avg_latency_us": 5213.231412761153, 00:22:14.164 "min_latency_us": 3148.5773913043477, 00:22:14.164 "max_latency_us": 10884.674782608696 00:22:14.164 } 00:22:14.164 ], 00:22:14.164 "core_count": 1 00:22:14.164 } 00:22:14.424 11:33:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:14.424 11:33:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.424 11:33:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:14.424 11:33:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.424 11:33:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:14.424 11:33:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2332435 00:22:14.424 11:33:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2332435 ']' 00:22:14.424 11:33:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2332435 00:22:14.424 11:33:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:14.424 11:33:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.424 11:33:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2332435 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2332435' 00:22:14.424 killing process with pid 2332435 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2332435 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2332435 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:14.424 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:14.684 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:14.684 [2024-11-19 11:33:25.941336] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:22:14.684 [2024-11-19 11:33:25.941384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332435 ] 00:22:14.684 [2024-11-19 11:33:26.018403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.684 [2024-11-19 11:33:26.059698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.684 [2024-11-19 11:33:26.797674] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name ee114e61-9027-40d4-9426-4ce7a05d9773 already exists 00:22:14.684 [2024-11-19 11:33:26.797701] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:ee114e61-9027-40d4-9426-4ce7a05d9773 alias for bdev NVMe1n1 00:22:14.684 [2024-11-19 11:33:26.797709] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:14.684 Running I/O for 1 seconds... 00:22:14.684 24449.00 IOPS, 95.50 MiB/s 00:22:14.684 Latency(us) 00:22:14.684 [2024-11-19T10:33:28.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.684 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:14.684 NVMe0n1 : 1.01 24495.92 95.69 0.00 0.00 5213.23 3148.58 10884.67 00:22:14.684 [2024-11-19T10:33:28.465Z] =================================================================================================================== 00:22:14.684 [2024-11-19T10:33:28.465Z] Total : 24495.92 95.69 0.00 0.00 5213.23 3148.58 10884.67 00:22:14.684 Received shutdown signal, test time was about 1.000000 seconds 00:22:14.684 00:22:14.684 Latency(us) 00:22:14.684 [2024-11-19T10:33:28.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.684 [2024-11-19T10:33:28.465Z] =================================================================================================================== 00:22:14.684 [2024-11-19T10:33:28.465Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:22:14.684 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.684 rmmod nvme_tcp 00:22:14.684 rmmod nvme_fabrics 00:22:14.684 rmmod nvme_keyring 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2332253 ']' 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2332253 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2332253 ']' 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2332253 
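The attach/detach attempts traced above are plain JSON-RPC calls over the bdevperf Unix domain socket; the `-114` error code in the responses matches Linux `EALREADY` (114), consistent with the "already exists" messages. As a minimal sketch of what `rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ...` sends on the wire (the helper names here are illustrative assumptions; the real client is SPDK's `scripts/rpc.py`, which adds proper framing and response handling), assuming the parameter names shown in the JSON-RPC dumps in this log:

```python
import json
import socket

def build_attach_request(req_id=1):
    # Mirrors the bdev_nvme_attach_controller request dumped in the log
    # above; field names are copied from that JSON-RPC error output.
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "NVMe0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostaddr": "10.0.0.1",
        },
    }

def send_request(sock_path="/var/tmp/bdevperf.sock"):
    # Hypothetical direct send for illustration only; requires a live
    # bdevperf process listening on sock_path, as in the test run above.
    req = build_attach_request()
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        return json.loads(s.recv(65536))
```

Attaching a second controller under the same name (`NVMe0`) without `-x` multipath options is what triggers the `-114` / "already exists with the specified network path" responses seen in the trace.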
00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2332253 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2332253' 00:22:14.684 killing process with pid 2332253 00:22:14.684 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2332253 00:22:14.685 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2332253 00:22:14.944 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:14.944 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.944 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.944 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:14.944 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:14.944 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.944 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.944 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.944 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:14.944 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.944 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.944 11:33:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.851 11:33:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:16.851 00:22:16.851 real 0m11.314s 00:22:16.851 user 0m12.825s 00:22:16.851 sys 0m5.211s 00:22:16.851 11:33:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.851 11:33:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.851 ************************************ 00:22:16.851 END TEST nvmf_multicontroller 00:22:16.851 ************************************ 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.111 ************************************ 00:22:17.111 START TEST nvmf_aer 00:22:17.111 ************************************ 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:17.111 * Looking for test storage... 
00:22:17.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:17.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.111 --rc genhtml_branch_coverage=1 00:22:17.111 --rc genhtml_function_coverage=1 00:22:17.111 --rc genhtml_legend=1 00:22:17.111 --rc geninfo_all_blocks=1 00:22:17.111 --rc geninfo_unexecuted_blocks=1 00:22:17.111 00:22:17.111 ' 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:17.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.111 --rc 
genhtml_branch_coverage=1 00:22:17.111 --rc genhtml_function_coverage=1 00:22:17.111 --rc genhtml_legend=1 00:22:17.111 --rc geninfo_all_blocks=1 00:22:17.111 --rc geninfo_unexecuted_blocks=1 00:22:17.111 00:22:17.111 ' 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:17.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.111 --rc genhtml_branch_coverage=1 00:22:17.111 --rc genhtml_function_coverage=1 00:22:17.111 --rc genhtml_legend=1 00:22:17.111 --rc geninfo_all_blocks=1 00:22:17.111 --rc geninfo_unexecuted_blocks=1 00:22:17.111 00:22:17.111 ' 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:17.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.111 --rc genhtml_branch_coverage=1 00:22:17.111 --rc genhtml_function_coverage=1 00:22:17.111 --rc genhtml_legend=1 00:22:17.111 --rc geninfo_all_blocks=1 00:22:17.111 --rc geninfo_unexecuted_blocks=1 00:22:17.111 00:22:17.111 ' 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.111 11:33:30 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.111 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.371 11:33:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:23.943 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.943 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:23.944 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.944 11:33:36 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:23.944 Found net devices under 0000:86:00.0: cvl_0_0 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:23.944 Found net devices under 0000:86:00.1: cvl_0_1 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:23.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:23.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:22:23.944 00:22:23.944 --- 10.0.0.2 ping statistics --- 00:22:23.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.944 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:23.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:22:23.944 00:22:23.944 --- 10.0.0.1 ping statistics --- 00:22:23.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.944 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2336270 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2336270 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2336270 ']' 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.944 11:33:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.944 [2024-11-19 11:33:36.862167] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:22:23.944 [2024-11-19 11:33:36.862215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.944 [2024-11-19 11:33:36.943702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.944 [2024-11-19 11:33:36.986909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:23.944 [2024-11-19 11:33:36.986946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.944 [2024-11-19 11:33:36.986957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.944 [2024-11-19 11:33:36.986963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.944 [2024-11-19 11:33:36.986968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.944 [2024-11-19 11:33:36.988531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.944 [2024-11-19 11:33:36.988644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.944 [2024-11-19 11:33:36.988775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.944 [2024-11-19 11:33:36.988777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.944 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.944 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:23.944 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.944 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.944 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.944 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.944 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.944 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.944 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.945 [2024-11-19 11:33:37.130159] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.945 Malloc0 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.945 [2024-11-19 11:33:37.189356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.945 [ 00:22:23.945 { 00:22:23.945 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:23.945 "subtype": "Discovery", 00:22:23.945 "listen_addresses": [], 00:22:23.945 "allow_any_host": true, 00:22:23.945 "hosts": [] 00:22:23.945 }, 00:22:23.945 { 00:22:23.945 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.945 "subtype": "NVMe", 00:22:23.945 "listen_addresses": [ 00:22:23.945 { 00:22:23.945 "trtype": "TCP", 00:22:23.945 "adrfam": "IPv4", 00:22:23.945 "traddr": "10.0.0.2", 00:22:23.945 "trsvcid": "4420" 00:22:23.945 } 00:22:23.945 ], 00:22:23.945 "allow_any_host": true, 00:22:23.945 "hosts": [], 00:22:23.945 "serial_number": "SPDK00000000000001", 00:22:23.945 "model_number": "SPDK bdev Controller", 00:22:23.945 "max_namespaces": 2, 00:22:23.945 "min_cntlid": 1, 00:22:23.945 "max_cntlid": 65519, 00:22:23.945 "namespaces": [ 00:22:23.945 { 00:22:23.945 "nsid": 1, 00:22:23.945 "bdev_name": "Malloc0", 00:22:23.945 "name": "Malloc0", 00:22:23.945 "nguid": "04E9A0D141AB424E8DB0218042594F1B", 00:22:23.945 "uuid": "04e9a0d1-41ab-424e-8db0-218042594f1b" 00:22:23.945 } 00:22:23.945 ] 00:22:23.945 } 00:22:23.945 ] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2336341 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.945 Malloc1 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.945 Asynchronous Event Request test 00:22:23.945 Attaching to 10.0.0.2 00:22:23.945 Attached to 10.0.0.2 00:22:23.945 Registering asynchronous event callbacks... 00:22:23.945 Starting namespace attribute notice tests for all controllers... 00:22:23.945 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:23.945 aer_cb - Changed Namespace 00:22:23.945 Cleaning up... 
00:22:23.945 [ 00:22:23.945 { 00:22:23.945 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:23.945 "subtype": "Discovery", 00:22:23.945 "listen_addresses": [], 00:22:23.945 "allow_any_host": true, 00:22:23.945 "hosts": [] 00:22:23.945 }, 00:22:23.945 { 00:22:23.945 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.945 "subtype": "NVMe", 00:22:23.945 "listen_addresses": [ 00:22:23.945 { 00:22:23.945 "trtype": "TCP", 00:22:23.945 "adrfam": "IPv4", 00:22:23.945 "traddr": "10.0.0.2", 00:22:23.945 "trsvcid": "4420" 00:22:23.945 } 00:22:23.945 ], 00:22:23.945 "allow_any_host": true, 00:22:23.945 "hosts": [], 00:22:23.945 "serial_number": "SPDK00000000000001", 00:22:23.945 "model_number": "SPDK bdev Controller", 00:22:23.945 "max_namespaces": 2, 00:22:23.945 "min_cntlid": 1, 00:22:23.945 "max_cntlid": 65519, 00:22:23.945 "namespaces": [ 00:22:23.945 { 00:22:23.945 "nsid": 1, 00:22:23.945 "bdev_name": "Malloc0", 00:22:23.945 "name": "Malloc0", 00:22:23.945 "nguid": "04E9A0D141AB424E8DB0218042594F1B", 00:22:23.945 "uuid": "04e9a0d1-41ab-424e-8db0-218042594f1b" 00:22:23.945 }, 00:22:23.945 { 00:22:23.945 "nsid": 2, 00:22:23.945 "bdev_name": "Malloc1", 00:22:23.945 "name": "Malloc1", 00:22:23.945 "nguid": "B45A8B4EE08841C3ACF30A9C73C22724", 00:22:23.945 "uuid": "b45a8b4e-e088-41c3-acf3-0a9c73c22724" 00:22:23.945 } 00:22:23.945 ] 00:22:23.945 } 00:22:23.945 ] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2336341 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.945 11:33:37 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:23.945 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:23.945 rmmod nvme_tcp 00:22:23.945 rmmod nvme_fabrics 00:22:23.945 rmmod nvme_keyring 00:22:23.946 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2336270 ']' 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2336270 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2336270 ']' 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2336270 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2336270 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2336270' 00:22:24.204 killing process with pid 2336270 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2336270 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2336270 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.204 11:33:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.739 11:33:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.739 00:22:26.739 real 0m9.327s 00:22:26.739 user 0m5.549s 00:22:26.739 sys 0m4.881s 00:22:26.739 11:33:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.739 11:33:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:26.739 ************************************ 00:22:26.739 END TEST nvmf_aer 00:22:26.739 ************************************ 00:22:26.739 11:33:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:26.739 11:33:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:26.739 11:33:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.739 11:33:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.739 ************************************ 00:22:26.739 START TEST nvmf_async_init 00:22:26.739 ************************************ 00:22:26.739 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:26.739 * Looking for test storage... 
00:22:26.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.739 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:26.739 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:22:26.739 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:26.739 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.740 11:33:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:26.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.740 --rc genhtml_branch_coverage=1 00:22:26.740 --rc genhtml_function_coverage=1 00:22:26.740 --rc genhtml_legend=1 00:22:26.740 --rc geninfo_all_blocks=1 00:22:26.740 --rc geninfo_unexecuted_blocks=1 00:22:26.740 
00:22:26.740 ' 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:26.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.740 --rc genhtml_branch_coverage=1 00:22:26.740 --rc genhtml_function_coverage=1 00:22:26.740 --rc genhtml_legend=1 00:22:26.740 --rc geninfo_all_blocks=1 00:22:26.740 --rc geninfo_unexecuted_blocks=1 00:22:26.740 00:22:26.740 ' 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:26.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.740 --rc genhtml_branch_coverage=1 00:22:26.740 --rc genhtml_function_coverage=1 00:22:26.740 --rc genhtml_legend=1 00:22:26.740 --rc geninfo_all_blocks=1 00:22:26.740 --rc geninfo_unexecuted_blocks=1 00:22:26.740 00:22:26.740 ' 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:26.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.740 --rc genhtml_branch_coverage=1 00:22:26.740 --rc genhtml_function_coverage=1 00:22:26.740 --rc genhtml_legend=1 00:22:26.740 --rc geninfo_all_blocks=1 00:22:26.740 --rc geninfo_unexecuted_blocks=1 00:22:26.740 00:22:26.740 ' 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=edb526e242994d5b9e71ee857f1764f8 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.740 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.741 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.741 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.741 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.741 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.741 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.741 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.741 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.741 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.741 11:33:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.316 11:33:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:33.316 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:33.316 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:33.316 Found net devices under 0000:86:00.0: cvl_0_0 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:33.316 Found net devices under 0000:86:00.1: cvl_0_1 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.316 11:33:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.316 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.316 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.316 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.316 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.316 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.316 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.316 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.316 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:33.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:22:33.316 00:22:33.316 --- 10.0.0.2 ping statistics --- 00:22:33.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.316 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:22:33.317 00:22:33.317 --- 10.0.0.1 ping statistics --- 00:22:33.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.317 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2340038 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2340038 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2340038 ']' 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.317 11:33:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.317 [2024-11-19 11:33:46.294182] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:22:33.317 [2024-11-19 11:33:46.294231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.317 [2024-11-19 11:33:46.374083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.317 [2024-11-19 11:33:46.413404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.317 [2024-11-19 11:33:46.413439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.317 [2024-11-19 11:33:46.413446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.317 [2024-11-19 11:33:46.413451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.317 [2024-11-19 11:33:46.413456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:33.317 [2024-11-19 11:33:46.414084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.577 [2024-11-19 11:33:47.166644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.577 null0 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g edb526e242994d5b9e71ee857f1764f8 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.577 [2024-11-19 11:33:47.210897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.577 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.837 nvme0n1 00:22:33.837 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.837 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:33.837 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.837 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.837 [ 00:22:33.837 { 00:22:33.837 "name": "nvme0n1", 00:22:33.837 "aliases": [ 00:22:33.837 "edb526e2-4299-4d5b-9e71-ee857f1764f8" 00:22:33.837 ], 00:22:33.837 "product_name": "NVMe disk", 00:22:33.837 "block_size": 512, 00:22:33.837 "num_blocks": 2097152, 00:22:33.837 "uuid": "edb526e2-4299-4d5b-9e71-ee857f1764f8", 00:22:33.837 "numa_id": 1, 00:22:33.837 "assigned_rate_limits": { 00:22:33.837 "rw_ios_per_sec": 0, 00:22:33.837 "rw_mbytes_per_sec": 0, 00:22:33.837 "r_mbytes_per_sec": 0, 00:22:33.837 "w_mbytes_per_sec": 0 00:22:33.837 }, 00:22:33.837 "claimed": false, 00:22:33.837 "zoned": false, 00:22:33.837 "supported_io_types": { 00:22:33.837 "read": true, 00:22:33.837 "write": true, 00:22:33.837 "unmap": false, 00:22:33.837 "flush": true, 00:22:33.837 "reset": true, 00:22:33.837 "nvme_admin": true, 00:22:33.837 "nvme_io": true, 00:22:33.837 "nvme_io_md": false, 00:22:33.837 "write_zeroes": true, 00:22:33.837 "zcopy": false, 00:22:33.837 "get_zone_info": false, 00:22:33.837 "zone_management": false, 00:22:33.837 "zone_append": false, 00:22:33.837 "compare": true, 00:22:33.837 "compare_and_write": true, 00:22:33.837 "abort": true, 00:22:33.837 "seek_hole": false, 00:22:33.837 "seek_data": false, 00:22:33.837 "copy": true, 00:22:33.837 
"nvme_iov_md": false 00:22:33.837 }, 00:22:33.837 "memory_domains": [ 00:22:33.837 { 00:22:33.837 "dma_device_id": "system", 00:22:33.837 "dma_device_type": 1 00:22:33.837 } 00:22:33.837 ], 00:22:33.837 "driver_specific": { 00:22:33.837 "nvme": [ 00:22:33.837 { 00:22:33.837 "trid": { 00:22:33.837 "trtype": "TCP", 00:22:33.837 "adrfam": "IPv4", 00:22:33.837 "traddr": "10.0.0.2", 00:22:33.837 "trsvcid": "4420", 00:22:33.837 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:33.837 }, 00:22:33.837 "ctrlr_data": { 00:22:33.837 "cntlid": 1, 00:22:33.837 "vendor_id": "0x8086", 00:22:33.837 "model_number": "SPDK bdev Controller", 00:22:33.837 "serial_number": "00000000000000000000", 00:22:33.837 "firmware_revision": "25.01", 00:22:33.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:33.837 "oacs": { 00:22:33.837 "security": 0, 00:22:33.837 "format": 0, 00:22:33.837 "firmware": 0, 00:22:33.837 "ns_manage": 0 00:22:33.837 }, 00:22:33.837 "multi_ctrlr": true, 00:22:33.837 "ana_reporting": false 00:22:33.837 }, 00:22:33.837 "vs": { 00:22:33.837 "nvme_version": "1.3" 00:22:33.837 }, 00:22:33.837 "ns_data": { 00:22:33.837 "id": 1, 00:22:33.837 "can_share": true 00:22:33.837 } 00:22:33.837 } 00:22:33.837 ], 00:22:33.837 "mp_policy": "active_passive" 00:22:33.837 } 00:22:33.837 } 00:22:33.837 ] 00:22:33.837 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.837 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:33.837 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.837 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.837 [2024-11-19 11:33:47.476638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:33.837 [2024-11-19 11:33:47.476703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xad1220 (9): Bad file descriptor 00:22:33.837 [2024-11-19 11:33:47.609038] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:33.837 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.837 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:33.837 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.837 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.098 [ 00:22:34.098 { 00:22:34.098 "name": "nvme0n1", 00:22:34.098 "aliases": [ 00:22:34.098 "edb526e2-4299-4d5b-9e71-ee857f1764f8" 00:22:34.098 ], 00:22:34.098 "product_name": "NVMe disk", 00:22:34.098 "block_size": 512, 00:22:34.098 "num_blocks": 2097152, 00:22:34.098 "uuid": "edb526e2-4299-4d5b-9e71-ee857f1764f8", 00:22:34.098 "numa_id": 1, 00:22:34.098 "assigned_rate_limits": { 00:22:34.098 "rw_ios_per_sec": 0, 00:22:34.098 "rw_mbytes_per_sec": 0, 00:22:34.098 "r_mbytes_per_sec": 0, 00:22:34.098 "w_mbytes_per_sec": 0 00:22:34.098 }, 00:22:34.098 "claimed": false, 00:22:34.098 "zoned": false, 00:22:34.098 "supported_io_types": { 00:22:34.098 "read": true, 00:22:34.098 "write": true, 00:22:34.098 "unmap": false, 00:22:34.098 "flush": true, 00:22:34.098 "reset": true, 00:22:34.098 "nvme_admin": true, 00:22:34.098 "nvme_io": true, 00:22:34.098 "nvme_io_md": false, 00:22:34.098 "write_zeroes": true, 00:22:34.098 "zcopy": false, 00:22:34.098 "get_zone_info": false, 00:22:34.098 "zone_management": false, 00:22:34.098 "zone_append": false, 00:22:34.098 "compare": true, 00:22:34.098 "compare_and_write": true, 00:22:34.098 "abort": true, 00:22:34.098 "seek_hole": false, 00:22:34.098 "seek_data": false, 00:22:34.098 "copy": true, 00:22:34.098 "nvme_iov_md": false 00:22:34.098 }, 00:22:34.098 "memory_domains": [ 
00:22:34.098 { 00:22:34.098 "dma_device_id": "system", 00:22:34.098 "dma_device_type": 1 00:22:34.098 } 00:22:34.098 ], 00:22:34.099 "driver_specific": { 00:22:34.099 "nvme": [ 00:22:34.099 { 00:22:34.099 "trid": { 00:22:34.099 "trtype": "TCP", 00:22:34.099 "adrfam": "IPv4", 00:22:34.099 "traddr": "10.0.0.2", 00:22:34.099 "trsvcid": "4420", 00:22:34.099 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:34.099 }, 00:22:34.099 "ctrlr_data": { 00:22:34.099 "cntlid": 2, 00:22:34.099 "vendor_id": "0x8086", 00:22:34.099 "model_number": "SPDK bdev Controller", 00:22:34.099 "serial_number": "00000000000000000000", 00:22:34.099 "firmware_revision": "25.01", 00:22:34.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:34.099 "oacs": { 00:22:34.099 "security": 0, 00:22:34.099 "format": 0, 00:22:34.099 "firmware": 0, 00:22:34.099 "ns_manage": 0 00:22:34.099 }, 00:22:34.099 "multi_ctrlr": true, 00:22:34.099 "ana_reporting": false 00:22:34.099 }, 00:22:34.099 "vs": { 00:22:34.099 "nvme_version": "1.3" 00:22:34.099 }, 00:22:34.099 "ns_data": { 00:22:34.099 "id": 1, 00:22:34.099 "can_share": true 00:22:34.099 } 00:22:34.099 } 00:22:34.099 ], 00:22:34.099 "mp_policy": "active_passive" 00:22:34.099 } 00:22:34.099 } 00:22:34.099 ] 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.wbDTHrzjr8 
00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.wbDTHrzjr8 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.wbDTHrzjr8 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 [2024-11-19 11:33:47.685277] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:34.099 [2024-11-19 11:33:47.685377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 [2024-11-19 11:33:47.701333] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.099 nvme0n1 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 [ 00:22:34.099 { 00:22:34.099 "name": "nvme0n1", 00:22:34.099 "aliases": [ 00:22:34.099 "edb526e2-4299-4d5b-9e71-ee857f1764f8" 00:22:34.099 ], 00:22:34.099 "product_name": "NVMe disk", 00:22:34.099 "block_size": 512, 00:22:34.099 "num_blocks": 2097152, 00:22:34.099 "uuid": "edb526e2-4299-4d5b-9e71-ee857f1764f8", 00:22:34.099 "numa_id": 1, 00:22:34.099 "assigned_rate_limits": { 00:22:34.099 "rw_ios_per_sec": 0, 00:22:34.099 
"rw_mbytes_per_sec": 0, 00:22:34.099 "r_mbytes_per_sec": 0, 00:22:34.099 "w_mbytes_per_sec": 0 00:22:34.099 }, 00:22:34.099 "claimed": false, 00:22:34.099 "zoned": false, 00:22:34.099 "supported_io_types": { 00:22:34.099 "read": true, 00:22:34.099 "write": true, 00:22:34.099 "unmap": false, 00:22:34.099 "flush": true, 00:22:34.099 "reset": true, 00:22:34.099 "nvme_admin": true, 00:22:34.099 "nvme_io": true, 00:22:34.099 "nvme_io_md": false, 00:22:34.099 "write_zeroes": true, 00:22:34.099 "zcopy": false, 00:22:34.099 "get_zone_info": false, 00:22:34.099 "zone_management": false, 00:22:34.099 "zone_append": false, 00:22:34.099 "compare": true, 00:22:34.099 "compare_and_write": true, 00:22:34.099 "abort": true, 00:22:34.099 "seek_hole": false, 00:22:34.099 "seek_data": false, 00:22:34.099 "copy": true, 00:22:34.099 "nvme_iov_md": false 00:22:34.099 }, 00:22:34.099 "memory_domains": [ 00:22:34.099 { 00:22:34.099 "dma_device_id": "system", 00:22:34.099 "dma_device_type": 1 00:22:34.099 } 00:22:34.099 ], 00:22:34.099 "driver_specific": { 00:22:34.099 "nvme": [ 00:22:34.099 { 00:22:34.099 "trid": { 00:22:34.099 "trtype": "TCP", 00:22:34.099 "adrfam": "IPv4", 00:22:34.099 "traddr": "10.0.0.2", 00:22:34.099 "trsvcid": "4421", 00:22:34.099 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:34.099 }, 00:22:34.099 "ctrlr_data": { 00:22:34.099 "cntlid": 3, 00:22:34.099 "vendor_id": "0x8086", 00:22:34.099 "model_number": "SPDK bdev Controller", 00:22:34.099 "serial_number": "00000000000000000000", 00:22:34.099 "firmware_revision": "25.01", 00:22:34.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:34.099 "oacs": { 00:22:34.099 "security": 0, 00:22:34.099 "format": 0, 00:22:34.099 "firmware": 0, 00:22:34.099 "ns_manage": 0 00:22:34.099 }, 00:22:34.099 "multi_ctrlr": true, 00:22:34.099 "ana_reporting": false 00:22:34.099 }, 00:22:34.099 "vs": { 00:22:34.099 "nvme_version": "1.3" 00:22:34.099 }, 00:22:34.099 "ns_data": { 00:22:34.099 "id": 1, 00:22:34.099 "can_share": true 00:22:34.099 } 
00:22:34.099 } 00:22:34.099 ], 00:22:34.099 "mp_policy": "active_passive" 00:22:34.099 } 00:22:34.099 } 00:22:34.099 ] 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.wbDTHrzjr8 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.099 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.099 rmmod nvme_tcp 00:22:34.099 rmmod nvme_fabrics 00:22:34.099 rmmod nvme_keyring 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:34.359 11:33:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2340038 ']' 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2340038 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2340038 ']' 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2340038 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2340038 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2340038' 00:22:34.359 killing process with pid 2340038 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2340038 00:22:34.359 11:33:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2340038 00:22:34.359 11:33:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.359 11:33:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.359 11:33:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.359 11:33:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:34.360 11:33:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:34.360 11:33:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.360 
11:33:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.360 11:33:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.360 11:33:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.360 11:33:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.360 11:33:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.360 11:33:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:36.898 00:22:36.898 real 0m10.077s 00:22:36.898 user 0m3.820s 00:22:36.898 sys 0m4.866s 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:36.898 ************************************ 00:22:36.898 END TEST nvmf_async_init 00:22:36.898 ************************************ 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.898 ************************************ 00:22:36.898 START TEST dma 00:22:36.898 ************************************ 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:22:36.898 * Looking for test storage... 00:22:36.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:36.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.898 --rc genhtml_branch_coverage=1 00:22:36.898 --rc genhtml_function_coverage=1 00:22:36.898 --rc genhtml_legend=1 00:22:36.898 --rc geninfo_all_blocks=1 00:22:36.898 --rc geninfo_unexecuted_blocks=1 00:22:36.898 00:22:36.898 ' 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:36.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.898 --rc genhtml_branch_coverage=1 00:22:36.898 --rc genhtml_function_coverage=1 
00:22:36.898 --rc genhtml_legend=1 00:22:36.898 --rc geninfo_all_blocks=1 00:22:36.898 --rc geninfo_unexecuted_blocks=1 00:22:36.898 00:22:36.898 ' 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:36.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.898 --rc genhtml_branch_coverage=1 00:22:36.898 --rc genhtml_function_coverage=1 00:22:36.898 --rc genhtml_legend=1 00:22:36.898 --rc geninfo_all_blocks=1 00:22:36.898 --rc geninfo_unexecuted_blocks=1 00:22:36.898 00:22:36.898 ' 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:36.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.898 --rc genhtml_branch_coverage=1 00:22:36.898 --rc genhtml_function_coverage=1 00:22:36.898 --rc genhtml_legend=1 00:22:36.898 --rc geninfo_all_blocks=1 00:22:36.898 --rc geninfo_unexecuted_blocks=1 00:22:36.898 00:22:36.898 ' 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.898 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:36.899 
11:33:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:36.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:36.899 00:22:36.899 real 0m0.215s 00:22:36.899 user 0m0.139s 00:22:36.899 sys 0m0.089s 00:22:36.899 11:33:50 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:36.899 ************************************ 00:22:36.899 END TEST dma 00:22:36.899 ************************************ 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.899 ************************************ 00:22:36.899 START TEST nvmf_identify 00:22:36.899 ************************************ 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:36.899 * Looking for test storage... 
00:22:36.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.899 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:37.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.159 --rc genhtml_branch_coverage=1 00:22:37.159 --rc genhtml_function_coverage=1 00:22:37.159 --rc genhtml_legend=1 00:22:37.159 --rc geninfo_all_blocks=1 00:22:37.159 --rc geninfo_unexecuted_blocks=1 00:22:37.159 00:22:37.159 ' 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:22:37.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.159 --rc genhtml_branch_coverage=1 00:22:37.159 --rc genhtml_function_coverage=1 00:22:37.159 --rc genhtml_legend=1 00:22:37.159 --rc geninfo_all_blocks=1 00:22:37.159 --rc geninfo_unexecuted_blocks=1 00:22:37.159 00:22:37.159 ' 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:37.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.159 --rc genhtml_branch_coverage=1 00:22:37.159 --rc genhtml_function_coverage=1 00:22:37.159 --rc genhtml_legend=1 00:22:37.159 --rc geninfo_all_blocks=1 00:22:37.159 --rc geninfo_unexecuted_blocks=1 00:22:37.159 00:22:37.159 ' 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:37.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.159 --rc genhtml_branch_coverage=1 00:22:37.159 --rc genhtml_function_coverage=1 00:22:37.159 --rc genhtml_legend=1 00:22:37.159 --rc geninfo_all_blocks=1 00:22:37.159 --rc geninfo_unexecuted_blocks=1 00:22:37.159 00:22:37.159 ' 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.159 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.160 11:33:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.745 11:33:56 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:43.745 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.745 
11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:43.745 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.745 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:43.746 Found net devices under 0000:86:00.0: cvl_0_0 00:22:43.746 11:33:56 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:43.746 Found net devices under 0000:86:00.1: cvl_0_1 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:22:43.746 00:22:43.746 --- 10.0.0.2 ping statistics --- 00:22:43.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.746 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:22:43.746 00:22:43.746 --- 10.0.0.1 ping statistics --- 00:22:43.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.746 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2343866 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2343866 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2343866 ']' 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.746 [2024-11-19 11:33:56.659358] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:22:43.746 [2024-11-19 11:33:56.659408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.746 [2024-11-19 11:33:56.723219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.746 [2024-11-19 11:33:56.767717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.746 [2024-11-19 11:33:56.767755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.746 [2024-11-19 11:33:56.767762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.746 [2024-11-19 11:33:56.767768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.746 [2024-11-19 11:33:56.767773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:43.746 [2024-11-19 11:33:56.769402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.746 [2024-11-19 11:33:56.769521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.746 [2024-11-19 11:33:56.769630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.746 [2024-11-19 11:33:56.769632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.746 [2024-11-19 11:33:56.866867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.746 Malloc0 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.746 11:33:56 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.746 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.747 [2024-11-19 11:33:56.980660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.747 11:33:56 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.747 11:33:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.747 [ 00:22:43.747 { 00:22:43.747 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:43.747 "subtype": "Discovery", 00:22:43.747 "listen_addresses": [ 00:22:43.747 { 00:22:43.747 "trtype": "TCP", 00:22:43.747 "adrfam": "IPv4", 00:22:43.747 "traddr": "10.0.0.2", 00:22:43.747 "trsvcid": "4420" 00:22:43.747 } 00:22:43.747 ], 00:22:43.747 "allow_any_host": true, 00:22:43.747 "hosts": [] 00:22:43.747 }, 00:22:43.747 { 00:22:43.747 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.747 "subtype": "NVMe", 00:22:43.747 "listen_addresses": [ 00:22:43.747 { 00:22:43.747 "trtype": "TCP", 00:22:43.747 "adrfam": "IPv4", 00:22:43.747 "traddr": "10.0.0.2", 00:22:43.747 "trsvcid": "4420" 00:22:43.747 } 00:22:43.747 ], 00:22:43.747 "allow_any_host": true, 00:22:43.747 "hosts": [], 00:22:43.747 "serial_number": "SPDK00000000000001", 00:22:43.747 "model_number": "SPDK bdev Controller", 00:22:43.747 "max_namespaces": 32, 00:22:43.747 "min_cntlid": 1, 00:22:43.747 "max_cntlid": 65519, 00:22:43.747 "namespaces": [ 00:22:43.747 { 00:22:43.747 "nsid": 1, 00:22:43.747 "bdev_name": "Malloc0", 00:22:43.747 "name": "Malloc0", 00:22:43.747 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:43.747 "eui64": "ABCDEF0123456789", 00:22:43.747 "uuid": "01d56838-98d2-464f-be02-b3b4adcbd27f" 00:22:43.747 } 00:22:43.747 ] 00:22:43.747 } 00:22:43.747 ] 00:22:43.747 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.747 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:43.747 [2024-11-19 11:33:57.031548] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:22:43.747 [2024-11-19 11:33:57.031581] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343890 ] 00:22:43.747 [2024-11-19 11:33:57.072905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:43.747 [2024-11-19 11:33:57.076956] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:43.747 [2024-11-19 11:33:57.076963] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:43.747 [2024-11-19 11:33:57.076976] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:43.747 [2024-11-19 11:33:57.076988] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:43.747 [2024-11-19 11:33:57.077501] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:43.747 [2024-11-19 11:33:57.077533] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1218690 0 00:22:43.747 [2024-11-19 11:33:57.083966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:43.747 [2024-11-19 11:33:57.083982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:43.747 [2024-11-19 11:33:57.083987] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:43.747 [2024-11-19 11:33:57.083990] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:43.747 [2024-11-19 11:33:57.084024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.084029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.084033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1218690) 00:22:43.747 [2024-11-19 11:33:57.084046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:43.747 [2024-11-19 11:33:57.084064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a100, cid 0, qid 0 00:22:43.747 [2024-11-19 11:33:57.090959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.747 [2024-11-19 11:33:57.090970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.747 [2024-11-19 11:33:57.090974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.090979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a100) on tqpair=0x1218690 00:22:43.747 [2024-11-19 11:33:57.090991] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:43.747 [2024-11-19 11:33:57.090999] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:43.747 [2024-11-19 11:33:57.091005] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:43.747 [2024-11-19 11:33:57.091021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.091028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.091034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1218690) 
00:22:43.747 [2024-11-19 11:33:57.091042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.747 [2024-11-19 11:33:57.091057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a100, cid 0, qid 0 00:22:43.747 [2024-11-19 11:33:57.091192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.747 [2024-11-19 11:33:57.091201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.747 [2024-11-19 11:33:57.091206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.091211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a100) on tqpair=0x1218690 00:22:43.747 [2024-11-19 11:33:57.091218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:43.747 [2024-11-19 11:33:57.091226] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:43.747 [2024-11-19 11:33:57.091233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.091238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.091243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1218690) 00:22:43.747 [2024-11-19 11:33:57.091249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.747 [2024-11-19 11:33:57.091260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a100, cid 0, qid 0 00:22:43.747 [2024-11-19 11:33:57.091325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.747 [2024-11-19 11:33:57.091332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:43.747 [2024-11-19 11:33:57.091334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.091338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a100) on tqpair=0x1218690 00:22:43.747 [2024-11-19 11:33:57.091344] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:43.747 [2024-11-19 11:33:57.091351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:43.747 [2024-11-19 11:33:57.091357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.091360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.091363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1218690) 00:22:43.747 [2024-11-19 11:33:57.091369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.747 [2024-11-19 11:33:57.091379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a100, cid 0, qid 0 00:22:43.747 [2024-11-19 11:33:57.091444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.747 [2024-11-19 11:33:57.091450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.747 [2024-11-19 11:33:57.091453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.091457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a100) on tqpair=0x1218690 00:22:43.747 [2024-11-19 11:33:57.091462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:43.747 [2024-11-19 11:33:57.091470] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.091474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.091477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1218690) 00:22:43.747 [2024-11-19 11:33:57.091482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.747 [2024-11-19 11:33:57.091494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a100, cid 0, qid 0 00:22:43.747 [2024-11-19 11:33:57.091558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.747 [2024-11-19 11:33:57.091564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.747 [2024-11-19 11:33:57.091567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.747 [2024-11-19 11:33:57.091570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a100) on tqpair=0x1218690 00:22:43.748 [2024-11-19 11:33:57.091575] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:43.748 [2024-11-19 11:33:57.091580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:43.748 [2024-11-19 11:33:57.091586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:43.748 [2024-11-19 11:33:57.091694] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:43.748 [2024-11-19 11:33:57.091699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:43.748 [2024-11-19 11:33:57.091708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.091712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.091716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1218690) 00:22:43.748 [2024-11-19 11:33:57.091723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.748 [2024-11-19 11:33:57.091736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a100, cid 0, qid 0 00:22:43.748 [2024-11-19 11:33:57.091801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.748 [2024-11-19 11:33:57.091808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.748 [2024-11-19 11:33:57.091811] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.091815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a100) on tqpair=0x1218690 00:22:43.748 [2024-11-19 11:33:57.091820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:43.748 [2024-11-19 11:33:57.091830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.091836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.091841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1218690) 00:22:43.748 [2024-11-19 11:33:57.091847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.748 [2024-11-19 11:33:57.091857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a100, cid 0, qid 0 00:22:43.748 [2024-11-19 
11:33:57.091917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.748 [2024-11-19 11:33:57.091923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.748 [2024-11-19 11:33:57.091926] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.091930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a100) on tqpair=0x1218690 00:22:43.748 [2024-11-19 11:33:57.091934] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:43.748 [2024-11-19 11:33:57.091941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:43.748 [2024-11-19 11:33:57.091956] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:43.748 [2024-11-19 11:33:57.091967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:43.748 [2024-11-19 11:33:57.091977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.091981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1218690) 00:22:43.748 [2024-11-19 11:33:57.091989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.748 [2024-11-19 11:33:57.092001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a100, cid 0, qid 0 00:22:43.748 [2024-11-19 11:33:57.092091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.748 [2024-11-19 11:33:57.092098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:22:43.748 [2024-11-19 11:33:57.092103] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.092107] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1218690): datao=0, datal=4096, cccid=0 00:22:43.748 [2024-11-19 11:33:57.092111] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127a100) on tqpair(0x1218690): expected_datao=0, payload_size=4096 00:22:43.748 [2024-11-19 11:33:57.092115] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.092129] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.092134] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.748 [2024-11-19 11:33:57.133096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.748 [2024-11-19 11:33:57.133099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a100) on tqpair=0x1218690 00:22:43.748 [2024-11-19 11:33:57.133111] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:43.748 [2024-11-19 11:33:57.133116] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:43.748 [2024-11-19 11:33:57.133120] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:43.748 [2024-11-19 11:33:57.133129] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:43.748 [2024-11-19 11:33:57.133133] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:43.748 [2024-11-19 11:33:57.133138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:43.748 [2024-11-19 11:33:57.133149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:43.748 [2024-11-19 11:33:57.133156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1218690) 00:22:43.748 [2024-11-19 11:33:57.133171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:43.748 [2024-11-19 11:33:57.133183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a100, cid 0, qid 0 00:22:43.748 [2024-11-19 11:33:57.133251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.748 [2024-11-19 11:33:57.133257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.748 [2024-11-19 11:33:57.133260] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a100) on tqpair=0x1218690 00:22:43.748 [2024-11-19 11:33:57.133273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1218690) 00:22:43.748 [2024-11-19 11:33:57.133285] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.748 [2024-11-19 11:33:57.133291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1218690) 00:22:43.748 [2024-11-19 11:33:57.133302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.748 [2024-11-19 11:33:57.133307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1218690) 00:22:43.748 [2024-11-19 11:33:57.133318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.748 [2024-11-19 11:33:57.133323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.748 [2024-11-19 11:33:57.133334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.748 [2024-11-19 11:33:57.133339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:43.748 [2024-11-19 11:33:57.133346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:43.748 [2024-11-19 11:33:57.133352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1218690) 00:22:43.748 [2024-11-19 11:33:57.133361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.748 [2024-11-19 11:33:57.133372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a100, cid 0, qid 0 00:22:43.748 [2024-11-19 11:33:57.133377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a280, cid 1, qid 0 00:22:43.748 [2024-11-19 11:33:57.133381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a400, cid 2, qid 0 00:22:43.748 [2024-11-19 11:33:57.133385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.748 [2024-11-19 11:33:57.133389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a700, cid 4, qid 0 00:22:43.748 [2024-11-19 11:33:57.133490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.748 [2024-11-19 11:33:57.133495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.748 [2024-11-19 11:33:57.133498] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a700) on tqpair=0x1218690 00:22:43.748 [2024-11-19 11:33:57.133509] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:43.748 [2024-11-19 11:33:57.133513] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:22:43.748 [2024-11-19 11:33:57.133525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.748 [2024-11-19 11:33:57.133529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1218690) 00:22:43.748 [2024-11-19 11:33:57.133534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.748 [2024-11-19 11:33:57.133544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a700, cid 4, qid 0 00:22:43.748 [2024-11-19 11:33:57.133618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.749 [2024-11-19 11:33:57.133624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.749 [2024-11-19 11:33:57.133627] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133630] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1218690): datao=0, datal=4096, cccid=4 00:22:43.749 [2024-11-19 11:33:57.133634] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127a700) on tqpair(0x1218690): expected_datao=0, payload_size=4096 00:22:43.749 [2024-11-19 11:33:57.133638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133644] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133647] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.749 [2024-11-19 11:33:57.133669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.749 [2024-11-19 11:33:57.133672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x127a700) on tqpair=0x1218690 00:22:43.749 [2024-11-19 11:33:57.133686] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:43.749 [2024-11-19 11:33:57.133707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1218690) 00:22:43.749 [2024-11-19 11:33:57.133716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.749 [2024-11-19 11:33:57.133722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1218690) 00:22:43.749 [2024-11-19 11:33:57.133734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.749 [2024-11-19 11:33:57.133748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a700, cid 4, qid 0 00:22:43.749 [2024-11-19 11:33:57.133752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a880, cid 5, qid 0 00:22:43.749 [2024-11-19 11:33:57.133856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.749 [2024-11-19 11:33:57.133861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.749 [2024-11-19 11:33:57.133864] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133867] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1218690): datao=0, datal=1024, cccid=4 00:22:43.749 [2024-11-19 11:33:57.133871] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127a700) on tqpair(0x1218690): expected_datao=0, payload_size=1024 00:22:43.749 [2024-11-19 11:33:57.133875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133881] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133884] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.749 [2024-11-19 11:33:57.133895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.749 [2024-11-19 11:33:57.133898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.133902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a880) on tqpair=0x1218690 00:22:43.749 [2024-11-19 11:33:57.177956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.749 [2024-11-19 11:33:57.177967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.749 [2024-11-19 11:33:57.177970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.177974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a700) on tqpair=0x1218690 00:22:43.749 [2024-11-19 11:33:57.177986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.177990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1218690) 00:22:43.749 [2024-11-19 11:33:57.177997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.749 [2024-11-19 11:33:57.178013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a700, cid 4, qid 0 00:22:43.749 [2024-11-19 11:33:57.178179] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.749 [2024-11-19 11:33:57.178185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.749 [2024-11-19 11:33:57.178189] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.178192] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1218690): datao=0, datal=3072, cccid=4 00:22:43.749 [2024-11-19 11:33:57.178196] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127a700) on tqpair(0x1218690): expected_datao=0, payload_size=3072 00:22:43.749 [2024-11-19 11:33:57.178199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.178205] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.178209] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.178242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.749 [2024-11-19 11:33:57.178247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.749 [2024-11-19 11:33:57.178250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.178254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a700) on tqpair=0x1218690 00:22:43.749 [2024-11-19 11:33:57.178261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.178265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1218690) 00:22:43.749 [2024-11-19 11:33:57.178270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.749 [2024-11-19 11:33:57.178283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a700, cid 4, qid 0 00:22:43.749 [2024-11-19 
11:33:57.178358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.749 [2024-11-19 11:33:57.178364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.749 [2024-11-19 11:33:57.178367] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.178370] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1218690): datao=0, datal=8, cccid=4 00:22:43.749 [2024-11-19 11:33:57.178374] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x127a700) on tqpair(0x1218690): expected_datao=0, payload_size=8 00:22:43.749 [2024-11-19 11:33:57.178377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.178383] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.178386] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.219064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.749 [2024-11-19 11:33:57.219075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.749 [2024-11-19 11:33:57.219081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.749 [2024-11-19 11:33:57.219085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a700) on tqpair=0x1218690 00:22:43.749 ===================================================== 00:22:43.749 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:43.749 ===================================================== 00:22:43.749 Controller Capabilities/Features 00:22:43.749 ================================ 00:22:43.749 Vendor ID: 0000 00:22:43.749 Subsystem Vendor ID: 0000 00:22:43.749 Serial Number: .................... 00:22:43.749 Model Number: ........................................ 
00:22:43.749 Firmware Version: 25.01 00:22:43.749 Recommended Arb Burst: 0 00:22:43.749 IEEE OUI Identifier: 00 00 00 00:22:43.749 Multi-path I/O 00:22:43.749 May have multiple subsystem ports: No 00:22:43.749 May have multiple controllers: No 00:22:43.749 Associated with SR-IOV VF: No 00:22:43.749 Max Data Transfer Size: 131072 00:22:43.749 Max Number of Namespaces: 0 00:22:43.749 Max Number of I/O Queues: 1024 00:22:43.749 NVMe Specification Version (VS): 1.3 00:22:43.749 NVMe Specification Version (Identify): 1.3 00:22:43.749 Maximum Queue Entries: 128 00:22:43.749 Contiguous Queues Required: Yes 00:22:43.749 Arbitration Mechanisms Supported 00:22:43.749 Weighted Round Robin: Not Supported 00:22:43.749 Vendor Specific: Not Supported 00:22:43.749 Reset Timeout: 15000 ms 00:22:43.749 Doorbell Stride: 4 bytes 00:22:43.749 NVM Subsystem Reset: Not Supported 00:22:43.749 Command Sets Supported 00:22:43.749 NVM Command Set: Supported 00:22:43.749 Boot Partition: Not Supported 00:22:43.749 Memory Page Size Minimum: 4096 bytes 00:22:43.749 Memory Page Size Maximum: 4096 bytes 00:22:43.749 Persistent Memory Region: Not Supported 00:22:43.749 Optional Asynchronous Events Supported 00:22:43.749 Namespace Attribute Notices: Not Supported 00:22:43.749 Firmware Activation Notices: Not Supported 00:22:43.749 ANA Change Notices: Not Supported 00:22:43.749 PLE Aggregate Log Change Notices: Not Supported 00:22:43.749 LBA Status Info Alert Notices: Not Supported 00:22:43.749 EGE Aggregate Log Change Notices: Not Supported 00:22:43.749 Normal NVM Subsystem Shutdown event: Not Supported 00:22:43.749 Zone Descriptor Change Notices: Not Supported 00:22:43.749 Discovery Log Change Notices: Supported 00:22:43.749 Controller Attributes 00:22:43.749 128-bit Host Identifier: Not Supported 00:22:43.749 Non-Operational Permissive Mode: Not Supported 00:22:43.749 NVM Sets: Not Supported 00:22:43.749 Read Recovery Levels: Not Supported 00:22:43.749 Endurance Groups: Not Supported 00:22:43.749 
Predictable Latency Mode: Not Supported 00:22:43.749 Traffic Based Keep ALive: Not Supported 00:22:43.749 Namespace Granularity: Not Supported 00:22:43.749 SQ Associations: Not Supported 00:22:43.749 UUID List: Not Supported 00:22:43.749 Multi-Domain Subsystem: Not Supported 00:22:43.749 Fixed Capacity Management: Not Supported 00:22:43.749 Variable Capacity Management: Not Supported 00:22:43.750 Delete Endurance Group: Not Supported 00:22:43.750 Delete NVM Set: Not Supported 00:22:43.750 Extended LBA Formats Supported: Not Supported 00:22:43.750 Flexible Data Placement Supported: Not Supported 00:22:43.750 00:22:43.750 Controller Memory Buffer Support 00:22:43.750 ================================ 00:22:43.750 Supported: No 00:22:43.750 00:22:43.750 Persistent Memory Region Support 00:22:43.750 ================================ 00:22:43.750 Supported: No 00:22:43.750 00:22:43.750 Admin Command Set Attributes 00:22:43.750 ============================ 00:22:43.750 Security Send/Receive: Not Supported 00:22:43.750 Format NVM: Not Supported 00:22:43.750 Firmware Activate/Download: Not Supported 00:22:43.750 Namespace Management: Not Supported 00:22:43.750 Device Self-Test: Not Supported 00:22:43.750 Directives: Not Supported 00:22:43.750 NVMe-MI: Not Supported 00:22:43.750 Virtualization Management: Not Supported 00:22:43.750 Doorbell Buffer Config: Not Supported 00:22:43.750 Get LBA Status Capability: Not Supported 00:22:43.750 Command & Feature Lockdown Capability: Not Supported 00:22:43.750 Abort Command Limit: 1 00:22:43.750 Async Event Request Limit: 4 00:22:43.750 Number of Firmware Slots: N/A 00:22:43.750 Firmware Slot 1 Read-Only: N/A 00:22:43.750 Firmware Activation Without Reset: N/A 00:22:43.750 Multiple Update Detection Support: N/A 00:22:43.750 Firmware Update Granularity: No Information Provided 00:22:43.750 Per-Namespace SMART Log: No 00:22:43.750 Asymmetric Namespace Access Log Page: Not Supported 00:22:43.750 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:43.750 Command Effects Log Page: Not Supported 00:22:43.750 Get Log Page Extended Data: Supported 00:22:43.750 Telemetry Log Pages: Not Supported 00:22:43.750 Persistent Event Log Pages: Not Supported 00:22:43.750 Supported Log Pages Log Page: May Support 00:22:43.750 Commands Supported & Effects Log Page: Not Supported 00:22:43.750 Feature Identifiers & Effects Log Page:May Support 00:22:43.750 NVMe-MI Commands & Effects Log Page: May Support 00:22:43.750 Data Area 4 for Telemetry Log: Not Supported 00:22:43.750 Error Log Page Entries Supported: 128 00:22:43.750 Keep Alive: Not Supported 00:22:43.750 00:22:43.750 NVM Command Set Attributes 00:22:43.750 ========================== 00:22:43.750 Submission Queue Entry Size 00:22:43.750 Max: 1 00:22:43.750 Min: 1 00:22:43.750 Completion Queue Entry Size 00:22:43.750 Max: 1 00:22:43.750 Min: 1 00:22:43.750 Number of Namespaces: 0 00:22:43.750 Compare Command: Not Supported 00:22:43.750 Write Uncorrectable Command: Not Supported 00:22:43.750 Dataset Management Command: Not Supported 00:22:43.750 Write Zeroes Command: Not Supported 00:22:43.750 Set Features Save Field: Not Supported 00:22:43.750 Reservations: Not Supported 00:22:43.750 Timestamp: Not Supported 00:22:43.750 Copy: Not Supported 00:22:43.750 Volatile Write Cache: Not Present 00:22:43.750 Atomic Write Unit (Normal): 1 00:22:43.750 Atomic Write Unit (PFail): 1 00:22:43.750 Atomic Compare & Write Unit: 1 00:22:43.750 Fused Compare & Write: Supported 00:22:43.750 Scatter-Gather List 00:22:43.750 SGL Command Set: Supported 00:22:43.750 SGL Keyed: Supported 00:22:43.750 SGL Bit Bucket Descriptor: Not Supported 00:22:43.750 SGL Metadata Pointer: Not Supported 00:22:43.750 Oversized SGL: Not Supported 00:22:43.750 SGL Metadata Address: Not Supported 00:22:43.750 SGL Offset: Supported 00:22:43.750 Transport SGL Data Block: Not Supported 00:22:43.750 Replay Protected Memory Block: Not Supported 00:22:43.750 00:22:43.750 
Firmware Slot Information 00:22:43.750 ========================= 00:22:43.750 Active slot: 0 00:22:43.750 00:22:43.750 00:22:43.750 Error Log 00:22:43.750 ========= 00:22:43.750 00:22:43.750 Active Namespaces 00:22:43.750 ================= 00:22:43.750 Discovery Log Page 00:22:43.750 ================== 00:22:43.750 Generation Counter: 2 00:22:43.750 Number of Records: 2 00:22:43.750 Record Format: 0 00:22:43.750 00:22:43.750 Discovery Log Entry 0 00:22:43.750 ---------------------- 00:22:43.750 Transport Type: 3 (TCP) 00:22:43.750 Address Family: 1 (IPv4) 00:22:43.750 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:43.750 Entry Flags: 00:22:43.750 Duplicate Returned Information: 1 00:22:43.750 Explicit Persistent Connection Support for Discovery: 1 00:22:43.750 Transport Requirements: 00:22:43.750 Secure Channel: Not Required 00:22:43.750 Port ID: 0 (0x0000) 00:22:43.750 Controller ID: 65535 (0xffff) 00:22:43.750 Admin Max SQ Size: 128 00:22:43.750 Transport Service Identifier: 4420 00:22:43.750 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:43.750 Transport Address: 10.0.0.2 00:22:43.750 Discovery Log Entry 1 00:22:43.750 ---------------------- 00:22:43.750 Transport Type: 3 (TCP) 00:22:43.750 Address Family: 1 (IPv4) 00:22:43.750 Subsystem Type: 2 (NVM Subsystem) 00:22:43.750 Entry Flags: 00:22:43.750 Duplicate Returned Information: 0 00:22:43.750 Explicit Persistent Connection Support for Discovery: 0 00:22:43.750 Transport Requirements: 00:22:43.750 Secure Channel: Not Required 00:22:43.750 Port ID: 0 (0x0000) 00:22:43.750 Controller ID: 65535 (0xffff) 00:22:43.750 Admin Max SQ Size: 128 00:22:43.750 Transport Service Identifier: 4420 00:22:43.750 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:43.750 Transport Address: 10.0.0.2 [2024-11-19 11:33:57.219169] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:43.750 [2024-11-19 
11:33:57.219180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a100) on tqpair=0x1218690 00:22:43.750 [2024-11-19 11:33:57.219186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.750 [2024-11-19 11:33:57.219190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a280) on tqpair=0x1218690 00:22:43.750 [2024-11-19 11:33:57.219195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.750 [2024-11-19 11:33:57.219199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a400) on tqpair=0x1218690 00:22:43.750 [2024-11-19 11:33:57.219203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.750 [2024-11-19 11:33:57.219207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.750 [2024-11-19 11:33:57.219211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.750 [2024-11-19 11:33:57.219221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.750 [2024-11-19 11:33:57.219225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.750 [2024-11-19 11:33:57.219228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.750 [2024-11-19 11:33:57.219234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.750 [2024-11-19 11:33:57.219248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.750 [2024-11-19 11:33:57.219311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.750 [2024-11-19 
11:33:57.219316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.750 [2024-11-19 11:33:57.219319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.751 [2024-11-19 11:33:57.219329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.751 [2024-11-19 11:33:57.219341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.751 [2024-11-19 11:33:57.219353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.751 [2024-11-19 11:33:57.219427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.751 [2024-11-19 11:33:57.219432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.751 [2024-11-19 11:33:57.219435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.751 [2024-11-19 11:33:57.219443] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:43.751 [2024-11-19 11:33:57.219447] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:43.751 [2024-11-19 11:33:57.219456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.751 
[2024-11-19 11:33:57.219463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.751 [2024-11-19 11:33:57.219470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.751 [2024-11-19 11:33:57.219480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.751 [2024-11-19 11:33:57.219543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.751 [2024-11-19 11:33:57.219549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.751 [2024-11-19 11:33:57.219552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.751 [2024-11-19 11:33:57.219564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.751 [2024-11-19 11:33:57.219576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.751 [2024-11-19 11:33:57.219585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.751 [2024-11-19 11:33:57.219643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.751 [2024-11-19 11:33:57.219648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.751 [2024-11-19 11:33:57.219651] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on 
tqpair=0x1218690 00:22:43.751 [2024-11-19 11:33:57.219662] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.751 [2024-11-19 11:33:57.219674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.751 [2024-11-19 11:33:57.219684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.751 [2024-11-19 11:33:57.219762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.751 [2024-11-19 11:33:57.219768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.751 [2024-11-19 11:33:57.219771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.751 [2024-11-19 11:33:57.219782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219788] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.751 [2024-11-19 11:33:57.219794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.751 [2024-11-19 11:33:57.219803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.751 [2024-11-19 11:33:57.219862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.751 [2024-11-19 11:33:57.219867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:43.751 [2024-11-19 11:33:57.219870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.751 [2024-11-19 11:33:57.219882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.751 [2024-11-19 11:33:57.219895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.751 [2024-11-19 11:33:57.219907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.751 [2024-11-19 11:33:57.219972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.751 [2024-11-19 11:33:57.219978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.751 [2024-11-19 11:33:57.219981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219984] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.751 [2024-11-19 11:33:57.219993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219996] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.219999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.751 [2024-11-19 11:33:57.220005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.751 [2024-11-19 11:33:57.220015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x127a580, cid 3, qid 0 00:22:43.751 [2024-11-19 11:33:57.220078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.751 [2024-11-19 11:33:57.220084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.751 [2024-11-19 11:33:57.220086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.751 [2024-11-19 11:33:57.220098] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.751 [2024-11-19 11:33:57.220110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.751 [2024-11-19 11:33:57.220120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.751 [2024-11-19 11:33:57.220197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.751 [2024-11-19 11:33:57.220203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.751 [2024-11-19 11:33:57.220206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.751 [2024-11-19 11:33:57.220217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.751 [2024-11-19 11:33:57.220230] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.751 [2024-11-19 11:33:57.220239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.751 [2024-11-19 11:33:57.220314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.751 [2024-11-19 11:33:57.220320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.751 [2024-11-19 11:33:57.220323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.751 [2024-11-19 11:33:57.220334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.751 [2024-11-19 11:33:57.220346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.751 [2024-11-19 11:33:57.220357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.751 [2024-11-19 11:33:57.220420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.751 [2024-11-19 11:33:57.220426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.751 [2024-11-19 11:33:57.220429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.751 [2024-11-19 11:33:57.220441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220444] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.751 [2024-11-19 11:33:57.220453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.751 [2024-11-19 11:33:57.220462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.751 [2024-11-19 11:33:57.220524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.751 [2024-11-19 11:33:57.220529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.751 [2024-11-19 11:33:57.220532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.751 [2024-11-19 11:33:57.220544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.751 [2024-11-19 11:33:57.220550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.751 [2024-11-19 11:33:57.220556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.752 [2024-11-19 11:33:57.220565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.752 [2024-11-19 11:33:57.220625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.752 [2024-11-19 11:33:57.220630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.752 [2024-11-19 11:33:57.220633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.220636] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.752 [2024-11-19 11:33:57.220644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.220648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.220651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.752 [2024-11-19 11:33:57.220656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.752 [2024-11-19 11:33:57.220665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.752 [2024-11-19 11:33:57.220743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.752 [2024-11-19 11:33:57.220749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.752 [2024-11-19 11:33:57.220752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.220755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.752 [2024-11-19 11:33:57.220763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.220767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.220770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.752 [2024-11-19 11:33:57.220775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.752 [2024-11-19 11:33:57.220784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.752 [2024-11-19 11:33:57.220844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.752 [2024-11-19 
11:33:57.220850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.752 [2024-11-19 11:33:57.220853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.220856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.752 [2024-11-19 11:33:57.220865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.220868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.220871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.752 [2024-11-19 11:33:57.220877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.752 [2024-11-19 11:33:57.220886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.752 [2024-11-19 11:33:57.224959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.752 [2024-11-19 11:33:57.224967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.752 [2024-11-19 11:33:57.224970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.224973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.752 [2024-11-19 11:33:57.224983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.224986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.224989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1218690) 00:22:43.752 [2024-11-19 11:33:57.224995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.752 [2024-11-19 
11:33:57.225006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x127a580, cid 3, qid 0 00:22:43.752 [2024-11-19 11:33:57.225138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.752 [2024-11-19 11:33:57.225143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.752 [2024-11-19 11:33:57.225146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.225150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x127a580) on tqpair=0x1218690 00:22:43.752 [2024-11-19 11:33:57.225156] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:22:43.752 00:22:43.752 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:43.752 [2024-11-19 11:33:57.260862] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:22:43.752 [2024-11-19 11:33:57.260909] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343899 ] 00:22:43.752 [2024-11-19 11:33:57.298708] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:43.752 [2024-11-19 11:33:57.298751] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:43.752 [2024-11-19 11:33:57.298755] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:43.752 [2024-11-19 11:33:57.298767] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:43.752 [2024-11-19 11:33:57.298775] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:43.752 [2024-11-19 11:33:57.306131] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:43.752 [2024-11-19 11:33:57.306163] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xff7690 0 00:22:43.752 [2024-11-19 11:33:57.312959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:43.752 [2024-11-19 11:33:57.312973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:43.752 [2024-11-19 11:33:57.312978] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:43.752 [2024-11-19 11:33:57.312980] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:43.752 [2024-11-19 11:33:57.313007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.313012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.313015] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xff7690) 00:22:43.752 [2024-11-19 11:33:57.313026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:43.752 [2024-11-19 11:33:57.313043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059100, cid 0, qid 0 00:22:43.752 [2024-11-19 11:33:57.319958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.752 [2024-11-19 11:33:57.319968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.752 [2024-11-19 11:33:57.319972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.319976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059100) on tqpair=0xff7690 00:22:43.752 [2024-11-19 11:33:57.319984] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:43.752 [2024-11-19 11:33:57.319990] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:43.752 [2024-11-19 11:33:57.319995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:43.752 [2024-11-19 11:33:57.320006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.320009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.320013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xff7690) 00:22:43.752 [2024-11-19 11:33:57.320022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.752 [2024-11-19 11:33:57.320036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059100, cid 0, qid 0 00:22:43.752 [2024-11-19 11:33:57.320193] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.752 [2024-11-19 11:33:57.320199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.752 [2024-11-19 11:33:57.320202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.320206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059100) on tqpair=0xff7690 00:22:43.752 [2024-11-19 11:33:57.320211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:43.752 [2024-11-19 11:33:57.320217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:43.752 [2024-11-19 11:33:57.320224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.320227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.320231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xff7690) 00:22:43.752 [2024-11-19 11:33:57.320237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.752 [2024-11-19 11:33:57.320247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059100, cid 0, qid 0 00:22:43.752 [2024-11-19 11:33:57.320309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.752 [2024-11-19 11:33:57.320315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.752 [2024-11-19 11:33:57.320321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.320324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059100) on tqpair=0xff7690 00:22:43.752 [2024-11-19 11:33:57.320329] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:22:43.752 [2024-11-19 11:33:57.320336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:43.752 [2024-11-19 11:33:57.320341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.320345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.320348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xff7690) 00:22:43.752 [2024-11-19 11:33:57.320353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.752 [2024-11-19 11:33:57.320365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059100, cid 0, qid 0 00:22:43.752 [2024-11-19 11:33:57.320429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.752 [2024-11-19 11:33:57.320435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.752 [2024-11-19 11:33:57.320438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.752 [2024-11-19 11:33:57.320441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059100) on tqpair=0xff7690 00:22:43.753 [2024-11-19 11:33:57.320445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:43.753 [2024-11-19 11:33:57.320454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.320457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.320460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xff7690) 00:22:43.753 [2024-11-19 11:33:57.320466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.753 [2024-11-19 11:33:57.320475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059100, cid 0, qid 0 00:22:43.753 [2024-11-19 11:33:57.320548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.753 [2024-11-19 11:33:57.320553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.753 [2024-11-19 11:33:57.320556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.320560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059100) on tqpair=0xff7690 00:22:43.753 [2024-11-19 11:33:57.320563] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:43.753 [2024-11-19 11:33:57.320567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:43.753 [2024-11-19 11:33:57.320574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:43.753 [2024-11-19 11:33:57.320682] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:43.753 [2024-11-19 11:33:57.320686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:43.753 [2024-11-19 11:33:57.320693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.320696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.320699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xff7690) 00:22:43.753 [2024-11-19 11:33:57.320705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.753 [2024-11-19 11:33:57.320715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059100, cid 0, qid 0 00:22:43.753 [2024-11-19 11:33:57.320779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.753 [2024-11-19 11:33:57.320784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.753 [2024-11-19 11:33:57.320787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.320791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059100) on tqpair=0xff7690 00:22:43.753 [2024-11-19 11:33:57.320795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:43.753 [2024-11-19 11:33:57.320803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.320806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.320809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xff7690) 00:22:43.753 [2024-11-19 11:33:57.320815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.753 [2024-11-19 11:33:57.320824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059100, cid 0, qid 0 00:22:43.753 [2024-11-19 11:33:57.320897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.753 [2024-11-19 11:33:57.320902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.753 [2024-11-19 11:33:57.320905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.320909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059100) on tqpair=0xff7690 00:22:43.753 [2024-11-19 11:33:57.320912] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:43.753 [2024-11-19 11:33:57.320916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:43.753 [2024-11-19 11:33:57.320923] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:43.753 [2024-11-19 11:33:57.320932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:43.753 [2024-11-19 11:33:57.320940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.320943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xff7690) 00:22:43.753 [2024-11-19 11:33:57.320955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.753 [2024-11-19 11:33:57.320966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059100, cid 0, qid 0 00:22:43.753 [2024-11-19 11:33:57.321061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.753 [2024-11-19 11:33:57.321067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.753 [2024-11-19 11:33:57.321070] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321074] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xff7690): datao=0, datal=4096, cccid=0 00:22:43.753 [2024-11-19 11:33:57.321078] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1059100) on tqpair(0xff7690): expected_datao=0, payload_size=4096 00:22:43.753 [2024-11-19 11:33:57.321081] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321088] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321091] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.753 [2024-11-19 11:33:57.321110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.753 [2024-11-19 11:33:57.321113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059100) on tqpair=0xff7690 00:22:43.753 [2024-11-19 11:33:57.321125] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:43.753 [2024-11-19 11:33:57.321130] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:43.753 [2024-11-19 11:33:57.321134] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:43.753 [2024-11-19 11:33:57.321139] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:43.753 [2024-11-19 11:33:57.321144] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:43.753 [2024-11-19 11:33:57.321148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:43.753 [2024-11-19 11:33:57.321156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:43.753 [2024-11-19 11:33:57.321162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321166] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xff7690) 00:22:43.753 [2024-11-19 11:33:57.321174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:43.753 [2024-11-19 11:33:57.321185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059100, cid 0, qid 0 00:22:43.753 [2024-11-19 11:33:57.321247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.753 [2024-11-19 11:33:57.321253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.753 [2024-11-19 11:33:57.321256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059100) on tqpair=0xff7690 00:22:43.753 [2024-11-19 11:33:57.321264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xff7690) 00:22:43.753 [2024-11-19 11:33:57.321276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.753 [2024-11-19 11:33:57.321281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xff7690) 00:22:43.753 [2024-11-19 11:33:57.321292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:43.753 [2024-11-19 11:33:57.321297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xff7690) 00:22:43.753 [2024-11-19 11:33:57.321308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.753 [2024-11-19 11:33:57.321313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xff7690) 00:22:43.753 [2024-11-19 11:33:57.321324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.753 [2024-11-19 11:33:57.321329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:43.753 [2024-11-19 11:33:57.321338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:43.753 [2024-11-19 11:33:57.321344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.753 [2024-11-19 11:33:57.321347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xff7690) 00:22:43.753 [2024-11-19 11:33:57.321353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.753 [2024-11-19 11:33:57.321363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1059100, cid 0, qid 0 00:22:43.753 [2024-11-19 11:33:57.321368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059280, cid 1, qid 0 00:22:43.753 [2024-11-19 11:33:57.321372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059400, cid 2, qid 0 00:22:43.753 [2024-11-19 11:33:57.321376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059580, cid 3, qid 0 00:22:43.754 [2024-11-19 11:33:57.321380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059700, cid 4, qid 0 00:22:43.754 [2024-11-19 11:33:57.321476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.754 [2024-11-19 11:33:57.321481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.754 [2024-11-19 11:33:57.321484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.321487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059700) on tqpair=0xff7690 00:22:43.754 [2024-11-19 11:33:57.321493] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:43.754 [2024-11-19 11:33:57.321498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.321506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.321512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.321518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.321521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.754 [2024-11-19 
11:33:57.321524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xff7690) 00:22:43.754 [2024-11-19 11:33:57.321529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:43.754 [2024-11-19 11:33:57.321539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059700, cid 4, qid 0 00:22:43.754 [2024-11-19 11:33:57.321603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.754 [2024-11-19 11:33:57.321608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.754 [2024-11-19 11:33:57.321612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.321615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059700) on tqpair=0xff7690 00:22:43.754 [2024-11-19 11:33:57.321666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.321676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.321682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.321685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xff7690) 00:22:43.754 [2024-11-19 11:33:57.321691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.754 [2024-11-19 11:33:57.321703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059700, cid 4, qid 0 00:22:43.754 [2024-11-19 11:33:57.321777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.754 [2024-11-19 11:33:57.321783] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.754 [2024-11-19 11:33:57.321786] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.321789] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xff7690): datao=0, datal=4096, cccid=4 00:22:43.754 [2024-11-19 11:33:57.321792] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1059700) on tqpair(0xff7690): expected_datao=0, payload_size=4096 00:22:43.754 [2024-11-19 11:33:57.321796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.321822] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.321825] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.321862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.754 [2024-11-19 11:33:57.321868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.754 [2024-11-19 11:33:57.321871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.321874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059700) on tqpair=0xff7690 00:22:43.754 [2024-11-19 11:33:57.321882] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:43.754 [2024-11-19 11:33:57.321893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.321901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.321907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.321911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0xff7690) 00:22:43.754 [2024-11-19 11:33:57.321916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.754 [2024-11-19 11:33:57.321927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059700, cid 4, qid 0 00:22:43.754 [2024-11-19 11:33:57.322009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.754 [2024-11-19 11:33:57.322015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.754 [2024-11-19 11:33:57.322018] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.322021] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xff7690): datao=0, datal=4096, cccid=4 00:22:43.754 [2024-11-19 11:33:57.322025] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1059700) on tqpair(0xff7690): expected_datao=0, payload_size=4096 00:22:43.754 [2024-11-19 11:33:57.322029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.322040] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.322043] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.322069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.754 [2024-11-19 11:33:57.322075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.754 [2024-11-19 11:33:57.322078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.322081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059700) on tqpair=0xff7690 00:22:43.754 [2024-11-19 11:33:57.322092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:43.754 
[2024-11-19 11:33:57.322101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.322107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.322112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xff7690) 00:22:43.754 [2024-11-19 11:33:57.322118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.754 [2024-11-19 11:33:57.322129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059700, cid 4, qid 0 00:22:43.754 [2024-11-19 11:33:57.322197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.754 [2024-11-19 11:33:57.322203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.754 [2024-11-19 11:33:57.322206] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.322208] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xff7690): datao=0, datal=4096, cccid=4 00:22:43.754 [2024-11-19 11:33:57.322212] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1059700) on tqpair(0xff7690): expected_datao=0, payload_size=4096 00:22:43.754 [2024-11-19 11:33:57.322216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.322226] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.322230] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.322261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.754 [2024-11-19 11:33:57.322266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.754 [2024-11-19 11:33:57.322269] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.322273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059700) on tqpair=0xff7690 00:22:43.754 [2024-11-19 11:33:57.322279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.322286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.322293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.322299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.322304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.322309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.322313] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:43.754 [2024-11-19 11:33:57.322317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:43.754 [2024-11-19 11:33:57.322322] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:43.754 [2024-11-19 11:33:57.322335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.754 [2024-11-19 11:33:57.322339] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xff7690) 00:22:43.755 [2024-11-19 11:33:57.322344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.755 [2024-11-19 11:33:57.322350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xff7690) 00:22:43.755 [2024-11-19 11:33:57.322362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.755 [2024-11-19 11:33:57.322374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059700, cid 4, qid 0 00:22:43.755 [2024-11-19 11:33:57.322381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059880, cid 5, qid 0 00:22:43.755 [2024-11-19 11:33:57.322466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.755 [2024-11-19 11:33:57.322472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.755 [2024-11-19 11:33:57.322474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059700) on tqpair=0xff7690 00:22:43.755 [2024-11-19 11:33:57.322483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.755 [2024-11-19 11:33:57.322488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.755 [2024-11-19 11:33:57.322491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059880) on tqpair=0xff7690 00:22:43.755 [2024-11-19 
11:33:57.322502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xff7690) 00:22:43.755 [2024-11-19 11:33:57.322512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.755 [2024-11-19 11:33:57.322522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059880, cid 5, qid 0 00:22:43.755 [2024-11-19 11:33:57.322587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.755 [2024-11-19 11:33:57.322592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.755 [2024-11-19 11:33:57.322595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059880) on tqpair=0xff7690 00:22:43.755 [2024-11-19 11:33:57.322606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xff7690) 00:22:43.755 [2024-11-19 11:33:57.322615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.755 [2024-11-19 11:33:57.322625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059880, cid 5, qid 0 00:22:43.755 [2024-11-19 11:33:57.322688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.755 [2024-11-19 11:33:57.322694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.755 [2024-11-19 11:33:57.322697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1059880) on tqpair=0xff7690 00:22:43.755 [2024-11-19 11:33:57.322708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xff7690) 00:22:43.755 [2024-11-19 11:33:57.322717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.755 [2024-11-19 11:33:57.322726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059880, cid 5, qid 0 00:22:43.755 [2024-11-19 11:33:57.322789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.755 [2024-11-19 11:33:57.322794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.755 [2024-11-19 11:33:57.322797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059880) on tqpair=0xff7690 00:22:43.755 [2024-11-19 11:33:57.322813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xff7690) 00:22:43.755 [2024-11-19 11:33:57.322823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.755 [2024-11-19 11:33:57.322832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xff7690) 00:22:43.755 [2024-11-19 11:33:57.322841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.755 
[2024-11-19 11:33:57.322847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xff7690) 00:22:43.755 [2024-11-19 11:33:57.322856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.755 [2024-11-19 11:33:57.322862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.322865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xff7690) 00:22:43.755 [2024-11-19 11:33:57.322870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.755 [2024-11-19 11:33:57.322881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059880, cid 5, qid 0 00:22:43.755 [2024-11-19 11:33:57.322886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059700, cid 4, qid 0 00:22:43.755 [2024-11-19 11:33:57.322890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059a00, cid 6, qid 0 00:22:43.755 [2024-11-19 11:33:57.322894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059b80, cid 7, qid 0 00:22:43.755 [2024-11-19 11:33:57.323046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.755 [2024-11-19 11:33:57.323053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.755 [2024-11-19 11:33:57.323056] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323059] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xff7690): datao=0, datal=8192, cccid=5 00:22:43.755 [2024-11-19 11:33:57.323063] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1059880) on tqpair(0xff7690): expected_datao=0, payload_size=8192 00:22:43.755 [2024-11-19 11:33:57.323067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323078] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323082] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.755 [2024-11-19 11:33:57.323094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.755 [2024-11-19 11:33:57.323097] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323101] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xff7690): datao=0, datal=512, cccid=4 00:22:43.755 [2024-11-19 11:33:57.323104] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1059700) on tqpair(0xff7690): expected_datao=0, payload_size=512 00:22:43.755 [2024-11-19 11:33:57.323108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323113] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323116] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.755 [2024-11-19 11:33:57.323126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.755 [2024-11-19 11:33:57.323129] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323132] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xff7690): datao=0, datal=512, cccid=6 00:22:43.755 [2024-11-19 11:33:57.323136] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1059a00) on tqpair(0xff7690): expected_datao=0, payload_size=512 00:22:43.755 [2024-11-19 11:33:57.323142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323147] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323150] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.755 [2024-11-19 11:33:57.323159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.755 [2024-11-19 11:33:57.323162] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323165] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xff7690): datao=0, datal=4096, cccid=7 00:22:43.755 [2024-11-19 11:33:57.323169] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1059b80) on tqpair(0xff7690): expected_datao=0, payload_size=4096 00:22:43.755 [2024-11-19 11:33:57.323173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323178] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323181] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.755 [2024-11-19 11:33:57.323193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.755 [2024-11-19 11:33:57.323196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.755 [2024-11-19 11:33:57.323200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059880) on tqpair=0xff7690 00:22:43.755 [2024-11-19 11:33:57.323210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.755 [2024-11-19 11:33:57.323215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5
00:22:43.755 [2024-11-19 11:33:57.323218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:43.755 [2024-11-19 11:33:57.323221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059700) on tqpair=0xff7690
00:22:43.755 [2024-11-19 11:33:57.323229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:43.755 [2024-11-19 11:33:57.323235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:43.755 [2024-11-19 11:33:57.323238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:43.755 [2024-11-19 11:33:57.323241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059a00) on tqpair=0xff7690
00:22:43.755 [2024-11-19 11:33:57.323247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:43.755 [2024-11-19 11:33:57.323252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:43.755 [2024-11-19 11:33:57.323255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:43.755 [2024-11-19 11:33:57.323258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059b80) on tqpair=0xff7690
00:22:43.755 =====================================================
00:22:43.755 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:43.755 =====================================================
00:22:43.755 Controller Capabilities/Features
00:22:43.755 ================================
00:22:43.756 Vendor ID: 8086
00:22:43.756 Subsystem Vendor ID: 8086
00:22:43.756 Serial Number: SPDK00000000000001
00:22:43.756 Model Number: SPDK bdev Controller
00:22:43.756 Firmware Version: 25.01
00:22:43.756 Recommended Arb Burst: 6
00:22:43.756 IEEE OUI Identifier: e4 d2 5c
00:22:43.756 Multi-path I/O
00:22:43.756 May have multiple subsystem ports: Yes
00:22:43.756 May have multiple controllers: Yes
00:22:43.756 Associated with SR-IOV VF: No
00:22:43.756 Max Data Transfer Size: 131072
00:22:43.756 Max Number of Namespaces: 32
00:22:43.756 Max Number of I/O Queues: 127
00:22:43.756 NVMe Specification Version (VS): 1.3
00:22:43.756 NVMe Specification Version (Identify): 1.3
00:22:43.756 Maximum Queue Entries: 128
00:22:43.756 Contiguous Queues Required: Yes
00:22:43.756 Arbitration Mechanisms Supported
00:22:43.756 Weighted Round Robin: Not Supported
00:22:43.756 Vendor Specific: Not Supported
00:22:43.756 Reset Timeout: 15000 ms
00:22:43.756 Doorbell Stride: 4 bytes
00:22:43.756 NVM Subsystem Reset: Not Supported
00:22:43.756 Command Sets Supported
00:22:43.756 NVM Command Set: Supported
00:22:43.756 Boot Partition: Not Supported
00:22:43.756 Memory Page Size Minimum: 4096 bytes
00:22:43.756 Memory Page Size Maximum: 4096 bytes
00:22:43.756 Persistent Memory Region: Not Supported
00:22:43.756 Optional Asynchronous Events Supported
00:22:43.756 Namespace Attribute Notices: Supported
00:22:43.756 Firmware Activation Notices: Not Supported
00:22:43.756 ANA Change Notices: Not Supported
00:22:43.756 PLE Aggregate Log Change Notices: Not Supported
00:22:43.756 LBA Status Info Alert Notices: Not Supported
00:22:43.756 EGE Aggregate Log Change Notices: Not Supported
00:22:43.756 Normal NVM Subsystem Shutdown event: Not Supported
00:22:43.756 Zone Descriptor Change Notices: Not Supported
00:22:43.756 Discovery Log Change Notices: Not Supported
00:22:43.756 Controller Attributes
00:22:43.756 128-bit Host Identifier: Supported
00:22:43.756 Non-Operational Permissive Mode: Not Supported
00:22:43.756 NVM Sets: Not Supported
00:22:43.756 Read Recovery Levels: Not Supported
00:22:43.756 Endurance Groups: Not Supported
00:22:43.756 Predictable Latency Mode: Not Supported
00:22:43.756 Traffic Based Keep ALive: Not Supported
00:22:43.756 Namespace Granularity: Not Supported
00:22:43.756 SQ Associations: Not Supported
00:22:43.756 UUID List: Not Supported
00:22:43.756 Multi-Domain Subsystem: Not Supported
00:22:43.756 Fixed Capacity Management: Not Supported
00:22:43.756 Variable Capacity Management: Not Supported
00:22:43.756 Delete Endurance Group: Not Supported
00:22:43.756 Delete NVM Set: Not Supported
00:22:43.756 Extended LBA Formats Supported: Not Supported
00:22:43.756 Flexible Data Placement Supported: Not Supported
00:22:43.756
00:22:43.756 Controller Memory Buffer Support
00:22:43.756 ================================
00:22:43.756 Supported: No
00:22:43.756
00:22:43.756 Persistent Memory Region Support
00:22:43.756 ================================
00:22:43.756 Supported: No
00:22:43.756
00:22:43.756 Admin Command Set Attributes
00:22:43.756 ============================
00:22:43.756 Security Send/Receive: Not Supported
00:22:43.756 Format NVM: Not Supported
00:22:43.756 Firmware Activate/Download: Not Supported
00:22:43.756 Namespace Management: Not Supported
00:22:43.756 Device Self-Test: Not Supported
00:22:43.756 Directives: Not Supported
00:22:43.756 NVMe-MI: Not Supported
00:22:43.756 Virtualization Management: Not Supported
00:22:43.756 Doorbell Buffer Config: Not Supported
00:22:43.756 Get LBA Status Capability: Not Supported
00:22:43.756 Command & Feature Lockdown Capability: Not Supported
00:22:43.756 Abort Command Limit: 4
00:22:43.756 Async Event Request Limit: 4
00:22:43.756 Number of Firmware Slots: N/A
00:22:43.756 Firmware Slot 1 Read-Only: N/A
00:22:43.756 Firmware Activation Without Reset: N/A
00:22:43.756 Multiple Update Detection Support: N/A
00:22:43.756 Firmware Update Granularity: No Information Provided
00:22:43.756 Per-Namespace SMART Log: No
00:22:43.756 Asymmetric Namespace Access Log Page: Not Supported
00:22:43.756 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:43.756 Command Effects Log Page: Supported
00:22:43.756 Get Log Page Extended Data: Supported
00:22:43.756 Telemetry Log Pages: Not Supported
00:22:43.756 Persistent Event Log Pages: Not Supported
00:22:43.756 Supported Log Pages Log Page: May Support
00:22:43.756 Commands Supported & Effects Log Page: Not Supported
00:22:43.756 Feature Identifiers & Effects Log Page:May Support
00:22:43.756 NVMe-MI Commands & Effects Log Page: May Support
00:22:43.756 Data Area 4 for Telemetry Log: Not Supported
00:22:43.756 Error Log Page Entries Supported: 128
00:22:43.756 Keep Alive: Supported
00:22:43.756 Keep Alive Granularity: 10000 ms
00:22:43.756
00:22:43.756 NVM Command Set Attributes
00:22:43.756 ==========================
00:22:43.756 Submission Queue Entry Size
00:22:43.756 Max: 64
00:22:43.756 Min: 64
00:22:43.756 Completion Queue Entry Size
00:22:43.756 Max: 16
00:22:43.756 Min: 16
00:22:43.756 Number of Namespaces: 32
00:22:43.756 Compare Command: Supported
00:22:43.756 Write Uncorrectable Command: Not Supported
00:22:43.756 Dataset Management Command: Supported
00:22:43.756 Write Zeroes Command: Supported
00:22:43.756 Set Features Save Field: Not Supported
00:22:43.756 Reservations: Supported
00:22:43.756 Timestamp: Not Supported
00:22:43.756 Copy: Supported
00:22:43.756 Volatile Write Cache: Present
00:22:43.756 Atomic Write Unit (Normal): 1
00:22:43.756 Atomic Write Unit (PFail): 1
00:22:43.756 Atomic Compare & Write Unit: 1
00:22:43.756 Fused Compare & Write: Supported
00:22:43.756 Scatter-Gather List
00:22:43.756 SGL Command Set: Supported
00:22:43.756 SGL Keyed: Supported
00:22:43.756 SGL Bit Bucket Descriptor: Not Supported
00:22:43.756 SGL Metadata Pointer: Not Supported
00:22:43.756 Oversized SGL: Not Supported
00:22:43.756 SGL Metadata Address: Not Supported
00:22:43.756 SGL Offset: Supported
00:22:43.756 Transport SGL Data Block: Not Supported
00:22:43.756 Replay Protected Memory Block: Not Supported
00:22:43.756
00:22:43.756 Firmware Slot Information
00:22:43.756 =========================
00:22:43.756 Active slot: 1
00:22:43.756 Slot 1 Firmware Revision: 25.01
00:22:43.756
00:22:43.756
00:22:43.756 Commands Supported and Effects
00:22:43.756 ==============================
00:22:43.756 Admin Commands
00:22:43.756 --------------
00:22:43.756 Get Log Page (02h): Supported
00:22:43.756 Identify (06h): Supported
00:22:43.756 Abort (08h): Supported
00:22:43.756 Set Features (09h): Supported
00:22:43.756 Get Features (0Ah): Supported
00:22:43.756 Asynchronous Event Request (0Ch): Supported
00:22:43.756 Keep Alive (18h): Supported
00:22:43.756 I/O Commands
00:22:43.756 ------------
00:22:43.756 Flush (00h): Supported LBA-Change
00:22:43.756 Write (01h): Supported LBA-Change
00:22:43.756 Read (02h): Supported
00:22:43.756 Compare (05h): Supported
00:22:43.756 Write Zeroes (08h): Supported LBA-Change
00:22:43.756 Dataset Management (09h): Supported LBA-Change
00:22:43.756 Copy (19h): Supported LBA-Change
00:22:43.756
00:22:43.756 Error Log
00:22:43.756 =========
00:22:43.756
00:22:43.756 Arbitration
00:22:43.756 ===========
00:22:43.756 Arbitration Burst: 1
00:22:43.756
00:22:43.756 Power Management
00:22:43.756 ================
00:22:43.756 Number of Power States: 1
00:22:43.756 Current Power State: Power State #0
00:22:43.756 Power State #0:
00:22:43.756 Max Power: 0.00 W
00:22:43.756 Non-Operational State: Operational
00:22:43.756 Entry Latency: Not Reported
00:22:43.756 Exit Latency: Not Reported
00:22:43.756 Relative Read Throughput: 0
00:22:43.756 Relative Read Latency: 0
00:22:43.756 Relative Write Throughput: 0
00:22:43.756 Relative Write Latency: 0
00:22:43.756 Idle Power: Not Reported
00:22:43.756 Active Power: Not Reported
00:22:43.756 Non-Operational Permissive Mode: Not Supported
00:22:43.756
00:22:43.756 Health Information
00:22:43.756 ==================
00:22:43.756 Critical Warnings:
00:22:43.756 Available Spare Space: OK
00:22:43.756 Temperature: OK
00:22:43.756 Device Reliability: OK
00:22:43.756 Read Only: No
00:22:43.756 Volatile Memory Backup: OK
00:22:43.756 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:43.756 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:22:43.756 Available Spare: 0%
00:22:43.756 Available Spare Threshold: 0%
00:22:43.756 Life Percentage Used:[2024-11-19 11:33:57.323342]
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.756 [2024-11-19 11:33:57.323347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xff7690) 00:22:43.757 [2024-11-19 11:33:57.323353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.757 [2024-11-19 11:33:57.323365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059b80, cid 7, qid 0 00:22:43.757 [2024-11-19 11:33:57.323441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.757 [2024-11-19 11:33:57.323448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.757 [2024-11-19 11:33:57.323452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.323458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059b80) on tqpair=0xff7690 00:22:43.757 [2024-11-19 11:33:57.323485] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:43.757 [2024-11-19 11:33:57.323495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059100) on tqpair=0xff7690 00:22:43.757 [2024-11-19 11:33:57.323501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.757 [2024-11-19 11:33:57.323508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059280) on tqpair=0xff7690 00:22:43.757 [2024-11-19 11:33:57.323512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.757 [2024-11-19 11:33:57.323517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059400) on tqpair=0xff7690 00:22:43.757 [2024-11-19 11:33:57.323520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.757 [2024-11-19 11:33:57.323525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059580) on tqpair=0xff7690 00:22:43.757 [2024-11-19 11:33:57.323529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.757 [2024-11-19 11:33:57.323536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.323539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.323542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xff7690) 00:22:43.757 [2024-11-19 11:33:57.323548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.757 [2024-11-19 11:33:57.323559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059580, cid 3, qid 0 00:22:43.757 [2024-11-19 11:33:57.323633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.757 [2024-11-19 11:33:57.323639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.757 [2024-11-19 11:33:57.323642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.323645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059580) on tqpair=0xff7690 00:22:43.757 [2024-11-19 11:33:57.323651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.323654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.323657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xff7690) 00:22:43.757 [2024-11-19 11:33:57.323663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:43.757 [2024-11-19 11:33:57.323675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059580, cid 3, qid 0 00:22:43.757 [2024-11-19 11:33:57.323784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.757 [2024-11-19 11:33:57.323790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.757 [2024-11-19 11:33:57.323793] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.323796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059580) on tqpair=0xff7690 00:22:43.757 [2024-11-19 11:33:57.323800] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:43.757 [2024-11-19 11:33:57.323804] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:43.757 [2024-11-19 11:33:57.323812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.323815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.323818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xff7690) 00:22:43.757 [2024-11-19 11:33:57.323824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.757 [2024-11-19 11:33:57.323833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059580, cid 3, qid 0 00:22:43.757 [2024-11-19 11:33:57.323935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.757 [2024-11-19 11:33:57.323940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.757 [2024-11-19 11:33:57.323943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.323955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059580) 
on tqpair=0xff7690 00:22:43.757 [2024-11-19 11:33:57.323964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.323968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.323971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xff7690) 00:22:43.757 [2024-11-19 11:33:57.323976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.757 [2024-11-19 11:33:57.323986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059580, cid 3, qid 0 00:22:43.757 [2024-11-19 11:33:57.324056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.757 [2024-11-19 11:33:57.324062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.757 [2024-11-19 11:33:57.324065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.324069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059580) on tqpair=0xff7690 00:22:43.757 [2024-11-19 11:33:57.324077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.324082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.324086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xff7690) 00:22:43.757 [2024-11-19 11:33:57.324092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.757 [2024-11-19 11:33:57.324102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059580, cid 3, qid 0 00:22:43.757 [2024-11-19 11:33:57.324187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.757 [2024-11-19 11:33:57.324193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:43.757 [2024-11-19 11:33:57.324196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.324199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059580) on tqpair=0xff7690 00:22:43.757 [2024-11-19 11:33:57.324207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.324211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.324214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xff7690) 00:22:43.757 [2024-11-19 11:33:57.324219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.757 [2024-11-19 11:33:57.324229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059580, cid 3, qid 0 00:22:43.757 [2024-11-19 11:33:57.324337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.757 [2024-11-19 11:33:57.324343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.757 [2024-11-19 11:33:57.324346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.324349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059580) on tqpair=0xff7690 00:22:43.757 [2024-11-19 11:33:57.324357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.324361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.324364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xff7690) 00:22:43.757 [2024-11-19 11:33:57.324369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.757 [2024-11-19 11:33:57.324378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1059580, cid 3, qid 0 00:22:43.757 [2024-11-19 11:33:57.324489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.757 [2024-11-19 11:33:57.324495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.757 [2024-11-19 11:33:57.324498] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.324501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059580) on tqpair=0xff7690 00:22:43.757 [2024-11-19 11:33:57.324511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.324515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.757 [2024-11-19 11:33:57.324518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xff7690) 00:22:43.757 [2024-11-19 11:33:57.324524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.757 [2024-11-19 11:33:57.324534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059580, cid 3, qid 0 00:22:43.757 [2024-11-19 11:33:57.333956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.759 [2024-11-19 11:33:57.333966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.759 [2024-11-19 11:33:57.333969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.759 [2024-11-19 11:33:57.333972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059580) on tqpair=0xff7690 00:22:43.759 [2024-11-19 11:33:57.333982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.759 [2024-11-19 11:33:57.333985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.759 [2024-11-19 11:33:57.333988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xff7690) 00:22:43.759 [2024-11-19 11:33:57.333995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.759 [2024-11-19 11:33:57.334009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059580, cid 3, qid 0 00:22:43.759 [2024-11-19 11:33:57.334142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.759 [2024-11-19 11:33:57.334148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.759 [2024-11-19 11:33:57.334151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.759 [2024-11-19 11:33:57.334154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1059580) on tqpair=0xff7690 00:22:43.759 [2024-11-19 11:33:57.334162] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 10 milliseconds 00:22:43.759 0% 00:22:43.759 Data Units Read: 0 00:22:43.759 Data Units Written: 0 00:22:43.759 Host Read Commands: 0 00:22:43.759 Host Write Commands: 0 00:22:43.759 Controller Busy Time: 0 minutes 00:22:43.759 Power Cycles: 0 00:22:43.759 Power On Hours: 0 hours 00:22:43.759 
Unsafe Shutdowns: 0 00:22:43.759 Unrecoverable Media Errors: 0 00:22:43.759 Lifetime Error Log Entries: 0 00:22:43.759 Warning Temperature Time: 0 minutes 00:22:43.759 Critical Temperature Time: 0 minutes 00:22:43.759 00:22:43.759 Number of Queues 00:22:43.759 ================ 00:22:43.759 Number of I/O Submission Queues: 127 00:22:43.759 Number of I/O Completion Queues: 127 00:22:43.759 00:22:43.759 Active Namespaces 00:22:43.759 ================= 00:22:43.759 Namespace ID:1 00:22:43.759 Error Recovery Timeout: Unlimited 00:22:43.759 Command Set Identifier: NVM (00h) 00:22:43.759 Deallocate: Supported 00:22:43.759 Deallocated/Unwritten Error: Not Supported 00:22:43.759 Deallocated Read Value: Unknown 00:22:43.759 Deallocate in Write Zeroes: Not Supported 00:22:43.759 Deallocated Guard Field: 0xFFFF 00:22:43.759 Flush: Supported 00:22:43.759 Reservation: Supported 00:22:43.759 Namespace Sharing Capabilities: Multiple Controllers 00:22:43.759 Size (in LBAs): 131072 (0GiB) 00:22:43.759 Capacity (in LBAs): 131072 (0GiB) 00:22:43.759 Utilization (in LBAs): 131072 (0GiB) 00:22:43.759 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:43.759 EUI64: ABCDEF0123456789 00:22:43.759 UUID: 01d56838-98d2-464f-be02-b3b4adcbd27f 00:22:43.759 Thin Provisioning: Not Supported 00:22:43.759 Per-NS Atomic Units: Yes 00:22:43.759 Atomic Boundary Size (Normal): 0 00:22:43.759 Atomic Boundary Size (PFail): 0 00:22:43.759 Atomic Boundary Offset: 0 00:22:43.759 Maximum Single Source Range Length: 65535 00:22:43.759 Maximum Copy Length: 65535 00:22:43.759 Maximum Source Range Count: 1 00:22:43.759 NGUID/EUI64 Never Reused: No 00:22:43.759 Namespace Write Protected: No 00:22:43.759 Number of LBA Formats: 1 00:22:43.759 Current LBA Format: LBA Format #00 00:22:43.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:43.759 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:43.759 rmmod nvme_tcp 00:22:43.759 rmmod nvme_fabrics 00:22:43.759 rmmod nvme_keyring 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2343866 ']' 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2343866 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2343866 ']' 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2343866 00:22:43.759 11:33:57 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2343866 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2343866' 00:22:43.759 killing process with pid 2343866 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2343866 00:22:43.759 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2343866 00:22:44.018 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:44.018 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:44.018 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:44.018 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:44.018 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:44.018 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:44.018 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:44.018 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.018 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:44.018 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.018 
11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.018 11:33:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.557 00:22:46.557 real 0m9.232s 00:22:46.557 user 0m5.288s 00:22:46.557 sys 0m4.839s 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:46.557 ************************************ 00:22:46.557 END TEST nvmf_identify 00:22:46.557 ************************************ 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.557 ************************************ 00:22:46.557 START TEST nvmf_perf 00:22:46.557 ************************************ 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:46.557 * Looking for test storage... 
00:22:46.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.557 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:46.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.558 --rc genhtml_branch_coverage=1 00:22:46.558 --rc genhtml_function_coverage=1 00:22:46.558 --rc genhtml_legend=1 00:22:46.558 --rc geninfo_all_blocks=1 00:22:46.558 --rc geninfo_unexecuted_blocks=1 00:22:46.558 00:22:46.558 ' 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:46.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:46.558 --rc genhtml_branch_coverage=1 00:22:46.558 --rc genhtml_function_coverage=1 00:22:46.558 --rc genhtml_legend=1 00:22:46.558 --rc geninfo_all_blocks=1 00:22:46.558 --rc geninfo_unexecuted_blocks=1 00:22:46.558 00:22:46.558 ' 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:46.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.558 --rc genhtml_branch_coverage=1 00:22:46.558 --rc genhtml_function_coverage=1 00:22:46.558 --rc genhtml_legend=1 00:22:46.558 --rc geninfo_all_blocks=1 00:22:46.558 --rc geninfo_unexecuted_blocks=1 00:22:46.558 00:22:46.558 ' 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:46.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.558 --rc genhtml_branch_coverage=1 00:22:46.558 --rc genhtml_function_coverage=1 00:22:46.558 --rc genhtml_legend=1 00:22:46.558 --rc geninfo_all_blocks=1 00:22:46.558 --rc geninfo_unexecuted_blocks=1 00:22:46.558 00:22:46.558 ' 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.558 11:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:46.558 11:34:00 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.558 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:46.559 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:46.559 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:46.559 11:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:53.130 11:34:05 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.130 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.131 
11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:53.131 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:53.131 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:53.131 Found net devices under 0000:86:00.0: cvl_0_0 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.131 11:34:05 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:53.131 Found net devices under 0000:86:00.1: cvl_0_1 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:53.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:22:53.131 00:22:53.131 --- 10.0.0.2 ping statistics --- 00:22:53.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.131 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:22:53.131 00:22:53.131 --- 10.0.0.1 ping statistics --- 00:22:53.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.131 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:53.131 11:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2347433 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2347433 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2347433 ']' 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.131 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:53.131 [2024-11-19 11:34:06.061356] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:22:53.132 [2024-11-19 11:34:06.061398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.132 [2024-11-19 11:34:06.141176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.132 [2024-11-19 11:34:06.184157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.132 [2024-11-19 11:34:06.184198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.132 [2024-11-19 11:34:06.184205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.132 [2024-11-19 11:34:06.184212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.132 [2024-11-19 11:34:06.184216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:53.132 [2024-11-19 11:34:06.185825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.132 [2024-11-19 11:34:06.185942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.132 [2024-11-19 11:34:06.186053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.132 [2024-11-19 11:34:06.186054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.132 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.132 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:53.132 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.132 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.132 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:53.132 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.132 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:53.132 11:34:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:55.664 11:34:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:55.664 11:34:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:55.922 11:34:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:55.922 11:34:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:56.181 11:34:09 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:56.181 11:34:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:56.181 11:34:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:56.181 11:34:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:56.181 11:34:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:56.181 [2024-11-19 11:34:09.946837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.439 11:34:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.439 11:34:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:56.439 11:34:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.710 11:34:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:56.710 11:34:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:56.969 11:34:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.227 [2024-11-19 11:34:10.777967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.227 11:34:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:57.485 11:34:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:57.485 11:34:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:57.485 11:34:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:57.485 11:34:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:58.861 Initializing NVMe Controllers 00:22:58.861 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:58.861 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:58.861 Initialization complete. Launching workers. 00:22:58.861 ======================================================== 00:22:58.861 Latency(us) 00:22:58.861 Device Information : IOPS MiB/s Average min max 00:22:58.861 PCIE (0000:5e:00.0) NSID 1 from core 0: 96643.00 377.51 330.56 10.65 4596.66 00:22:58.861 ======================================================== 00:22:58.861 Total : 96643.00 377.51 330.56 10.65 4596.66 00:22:58.861 00:22:58.861 11:34:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:59.799 Initializing NVMe Controllers 00:22:59.799 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:59.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:59.799 Initialization complete. Launching workers. 
00:22:59.799 ======================================================== 00:22:59.799 Latency(us) 00:22:59.799 Device Information : IOPS MiB/s Average min max 00:22:59.799 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 11170.37 107.39 45745.80 00:22:59.799 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.00 0.16 24481.49 7189.64 47890.51 00:22:59.799 ======================================================== 00:22:59.799 Total : 133.00 0.52 15273.80 107.39 47890.51 00:22:59.799 00:22:59.799 11:34:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:01.176 Initializing NVMe Controllers 00:23:01.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:01.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:01.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:01.176 Initialization complete. Launching workers. 
00:23:01.176 ======================================================== 00:23:01.176 Latency(us) 00:23:01.176 Device Information : IOPS MiB/s Average min max 00:23:01.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10869.53 42.46 2954.80 522.33 6925.08 00:23:01.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3839.83 15.00 8383.81 6305.19 15990.37 00:23:01.176 ======================================================== 00:23:01.176 Total : 14709.37 57.46 4372.02 522.33 15990.37 00:23:01.176 00:23:01.176 11:34:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:01.176 11:34:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:01.176 11:34:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:04.465 Initializing NVMe Controllers 00:23:04.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:04.465 Controller IO queue size 128, less than required. 00:23:04.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.465 Controller IO queue size 128, less than required. 00:23:04.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:04.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:04.465 Initialization complete. Launching workers. 
00:23:04.465 ========================================================
00:23:04.465 Latency(us)
00:23:04.465 Device Information : IOPS MiB/s Average min max
00:23:04.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1762.65 440.66 73660.84 41741.05 133947.34
00:23:04.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 598.19 149.55 226316.89 91529.74 368835.81
00:23:04.465 ========================================================
00:23:04.465 Total : 2360.85 590.21 112341.02 41741.05 368835.81
00:23:04.465
00:23:04.465 11:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:23:04.465 No valid NVMe controllers or AIO or URING devices found
00:23:04.465 Initializing NVMe Controllers
00:23:04.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:04.465 Controller IO queue size 128, less than required.
00:23:04.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.465 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:23:04.465 Controller IO queue size 128, less than required.
00:23:04.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.465 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:23:04.465 WARNING: Some requested NVMe devices were skipped
00:23:04.465 11:34:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:23:06.999 Initializing NVMe Controllers
00:23:06.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:06.999 Controller IO queue size 128, less than required.
00:23:06.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:06.999 Controller IO queue size 128, less than required.
00:23:06.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:06.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:06.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:06.999 Initialization complete. Launching workers.
00:23:06.999
00:23:06.999 ====================
00:23:06.999 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:23:06.999 TCP transport:
00:23:06.999 polls: 14672
00:23:06.999 idle_polls: 11490
00:23:06.999 sock_completions: 3182
00:23:06.999 nvme_completions: 6039
00:23:06.999 submitted_requests: 9074
00:23:06.999 queued_requests: 1
00:23:06.999
00:23:06.999 ====================
00:23:06.999 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:23:06.999 TCP transport:
00:23:06.999 polls: 14928
00:23:06.999 idle_polls: 11239
00:23:06.999 sock_completions: 3689
00:23:06.999 nvme_completions: 6563
00:23:06.999 submitted_requests: 9818
00:23:06.999 queued_requests: 1
00:23:06.999 ========================================================
00:23:06.999 Latency(us)
00:23:06.999 Device Information : IOPS MiB/s Average min max
00:23:06.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1506.07 376.52 86313.10 59153.97 144090.29
00:23:06.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1636.77 409.19 79337.20 41851.41 124187.26
00:23:06.999 ========================================================
00:23:06.999 Total : 3142.84 785.71 82680.10 41851.41 144090.29
00:23:06.999
00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf
-- nvmf/common.sh@121 -- # sync 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:06.999 rmmod nvme_tcp 00:23:06.999 rmmod nvme_fabrics 00:23:06.999 rmmod nvme_keyring 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2347433 ']' 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2347433 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2347433 ']' 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2347433 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:06.999 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.257 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2347433 00:23:07.257 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:07.257 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:07.257 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2347433' 00:23:07.257 killing process with pid 2347433 00:23:07.257 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2347433
00:23:07.257 11:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2347433
00:23:08.635 11:34:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:08.635 11:34:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:08.635 11:34:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:08.635 11:34:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:23:08.635 11:34:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:23:08.635 11:34:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:08.635 11:34:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:23:08.635 11:34:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:08.635 11:34:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:08.635 11:34:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:08.635 11:34:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:08.635 11:34:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:11.172
00:23:11.172 real 0m24.573s
00:23:11.172 user 1m4.116s
00:23:11.172 sys 0m8.334s
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:11.172 ************************************
00:23:11.172 END TEST nvmf_perf
00:23:11.172 ************************************
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:11.172 ************************************
00:23:11.172 START TEST nvmf_fio_host
00:23:11.172 ************************************
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:23:11.172 * Looking for test storage...
00:23:11.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-:
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-:
00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2
00:23:11.172 11:34:24
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:11.172 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.173 11:34:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:11.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.173 --rc genhtml_branch_coverage=1 00:23:11.173 --rc genhtml_function_coverage=1 00:23:11.173 --rc genhtml_legend=1 00:23:11.173 --rc geninfo_all_blocks=1 00:23:11.173 --rc geninfo_unexecuted_blocks=1 00:23:11.173 00:23:11.173 ' 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:11.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.173 --rc genhtml_branch_coverage=1 00:23:11.173 --rc genhtml_function_coverage=1 00:23:11.173 --rc genhtml_legend=1 00:23:11.173 --rc geninfo_all_blocks=1 00:23:11.173 --rc geninfo_unexecuted_blocks=1 00:23:11.173 00:23:11.173 ' 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:11.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.173 --rc genhtml_branch_coverage=1 00:23:11.173 --rc genhtml_function_coverage=1 00:23:11.173 --rc genhtml_legend=1 00:23:11.173 --rc geninfo_all_blocks=1 00:23:11.173 --rc geninfo_unexecuted_blocks=1 00:23:11.173 00:23:11.173 ' 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:11.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.173 --rc genhtml_branch_coverage=1 00:23:11.173 --rc genhtml_function_coverage=1 00:23:11.173 --rc genhtml_legend=1 00:23:11.173 --rc geninfo_all_blocks=1 00:23:11.173 --rc geninfo_unexecuted_blocks=1 00:23:11.173 00:23:11.173 ' 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.173 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:11.174 11:34:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.174 11:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.743 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:23:17.744 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:17.744 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.744 11:34:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:17.744 Found net devices under 0000:86:00.0: cvl_0_0 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:17.744 Found net devices under 0000:86:00.1: cvl_0_1 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.744 11:34:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:17.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:23:17.744 00:23:17.744 --- 10.0.0.2 ping statistics --- 00:23:17.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.744 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:17.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:23:17.744 00:23:17.744 --- 10.0.0.1 ping statistics --- 00:23:17.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.744 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2353705 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2353705 00:23:17.744 
11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2353705 ']' 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.744 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.744 [2024-11-19 11:34:30.659396] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:23:17.745 [2024-11-19 11:34:30.659448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.745 [2024-11-19 11:34:30.738457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.745 [2024-11-19 11:34:30.781861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.745 [2024-11-19 11:34:30.781900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:17.745 [2024-11-19 11:34:30.781907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.745 [2024-11-19 11:34:30.781913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.745 [2024-11-19 11:34:30.781917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.745 [2024-11-19 11:34:30.783543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.745 [2024-11-19 11:34:30.783652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.745 [2024-11-19 11:34:30.783754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.745 [2024-11-19 11:34:30.783756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.745 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.745 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:17.745 11:34:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:17.745 [2024-11-19 11:34:31.049535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.745 11:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:17.745 11:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:17.745 11:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.745 11:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:17.745 Malloc1 00:23:17.745 11:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:18.003 11:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:18.003 11:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.261 [2024-11-19 11:34:31.915268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.261 11:34:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:18.570 11:34:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:18.570 11:34:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:18.881 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:18.881 fio-3.35 00:23:18.881 Starting 1 thread 00:23:21.432 00:23:21.432 test: (groupid=0, jobs=1): err= 0: pid=2354119: Tue Nov 19 11:34:34 2024 00:23:21.432 read: IOPS=11.5k, BW=45.0MiB/s (47.1MB/s)(90.1MiB/2005msec) 00:23:21.432 slat (nsec): min=1582, max=250819, avg=1741.59, stdev=2249.41 00:23:21.432 clat (usec): min=3127, max=10980, avg=6161.60, stdev=460.78 00:23:21.432 lat (usec): min=3160, max=10982, avg=6163.35, stdev=460.70 00:23:21.432 clat percentiles (usec): 00:23:21.432 | 1.00th=[ 5080], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:23:21.432 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6259], 00:23:21.432 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:23:21.432 | 99.00th=[ 7177], 99.50th=[ 7242], 99.90th=[ 8848], 99.95th=[ 9503], 00:23:21.432 | 99.99th=[10945] 00:23:21.432 bw ( KiB/s): min=45384, max=46824, per=99.94%, avg=46012.00, stdev=621.07, samples=4 00:23:21.432 iops : min=11346, max=11706, avg=11503.00, stdev=155.27, samples=4 00:23:21.432 write: IOPS=11.4k, BW=44.6MiB/s (46.8MB/s)(89.5MiB/2005msec); 0 zone resets 00:23:21.432 slat (nsec): min=1620, max=236183, avg=1800.10, stdev=1724.54 00:23:21.432 clat (usec): min=2431, max=8943, avg=4966.58, stdev=383.27 00:23:21.432 lat (usec): min=2446, max=8945, avg=4968.38, stdev=383.31 00:23:21.432 clat percentiles (usec): 00:23:21.432 | 1.00th=[ 4080], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4686], 00:23:21.432 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5080], 
00:23:21.432 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5538], 00:23:21.433 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 7635], 99.95th=[ 8356], 00:23:21.433 | 99.99th=[ 8848] 00:23:21.433 bw ( KiB/s): min=45440, max=46080, per=100.00%, avg=45714.00, stdev=268.24, samples=4 00:23:21.433 iops : min=11360, max=11520, avg=11428.50, stdev=67.06, samples=4 00:23:21.433 lat (msec) : 4=0.33%, 10=99.64%, 20=0.02% 00:23:21.433 cpu : usr=72.21%, sys=26.80%, ctx=68, majf=0, minf=3 00:23:21.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:21.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:21.433 issued rwts: total=23077,22913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:21.433 00:23:21.433 Run status group 0 (all jobs): 00:23:21.433 READ: bw=45.0MiB/s (47.1MB/s), 45.0MiB/s-45.0MiB/s (47.1MB/s-47.1MB/s), io=90.1MiB (94.5MB), run=2005-2005msec 00:23:21.433 WRITE: bw=44.6MiB/s (46.8MB/s), 44.6MiB/s-44.6MiB/s (46.8MB/s-46.8MB/s), io=89.5MiB (93.9MB), run=2005-2005msec 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:21.433 11:34:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:21.692 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:21.692 fio-3.35 00:23:21.692 Starting 1 thread 00:23:24.230 00:23:24.230 test: (groupid=0, jobs=1): err= 0: pid=2354693: Tue Nov 19 11:34:37 2024 00:23:24.230 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(339MiB/2006msec) 00:23:24.230 slat (nsec): min=2575, max=87290, avg=2817.09, stdev=1232.78 00:23:24.230 clat (usec): min=1802, max=14547, avg=6802.72, stdev=1599.63 00:23:24.230 lat (usec): min=1805, max=14550, avg=6805.53, stdev=1599.72 00:23:24.230 clat percentiles (usec): 00:23:24.230 | 1.00th=[ 3589], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5407], 00:23:24.230 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7308], 00:23:24.230 | 70.00th=[ 7635], 80.00th=[ 8029], 90.00th=[ 8717], 95.00th=[ 9503], 00:23:24.230 | 99.00th=[11076], 99.50th=[11731], 99.90th=[12518], 99.95th=[12780], 00:23:24.230 | 99.99th=[13042] 00:23:24.230 bw ( KiB/s): min=76832, max=95872, per=50.24%, avg=87024.00, stdev=8008.29, samples=4 00:23:24.230 iops : min= 4802, max= 5992, avg=5439.00, stdev=500.52, samples=4 00:23:24.230 write: IOPS=6391, BW=99.9MiB/s (105MB/s)(178MiB/1781msec); 0 zone resets 00:23:24.230 slat (usec): min=29, max=379, avg=31.70, stdev= 6.70 00:23:24.230 clat (usec): min=3831, max=16203, avg=8821.13, stdev=1488.97 00:23:24.230 lat (usec): min=3862, max=16233, avg=8852.84, stdev=1489.90 00:23:24.230 clat percentiles (usec): 00:23:24.230 | 1.00th=[ 5866], 5.00th=[ 6652], 10.00th=[ 7046], 
20.00th=[ 7570], 00:23:24.230 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9110], 00:23:24.230 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10945], 95.00th=[11600], 00:23:24.230 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13566], 99.95th=[14222], 00:23:24.230 | 99.99th=[16057] 00:23:24.230 bw ( KiB/s): min=80960, max=99712, per=88.71%, avg=90720.00, stdev=7927.82, samples=4 00:23:24.230 iops : min= 5060, max= 6232, avg=5670.00, stdev=495.49, samples=4 00:23:24.230 lat (msec) : 2=0.02%, 4=1.68%, 10=89.30%, 20=9.00% 00:23:24.230 cpu : usr=87.04%, sys=12.16%, ctx=50, majf=0, minf=3 00:23:24.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:24.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:24.230 issued rwts: total=21717,11384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:24.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:24.230 00:23:24.230 Run status group 0 (all jobs): 00:23:24.230 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=339MiB (356MB), run=2006-2006msec 00:23:24.230 WRITE: bw=99.9MiB/s (105MB/s), 99.9MiB/s-99.9MiB/s (105MB/s-105MB/s), io=178MiB (187MB), run=1781-1781msec 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:24.230 rmmod nvme_tcp 00:23:24.230 rmmod nvme_fabrics 00:23:24.230 rmmod nvme_keyring 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2353705 ']' 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2353705 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2353705 ']' 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2353705 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2353705 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2353705' 
00:23:24.230 killing process with pid 2353705 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2353705 00:23:24.230 11:34:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2353705 00:23:24.490 11:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:24.490 11:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:24.490 11:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:24.490 11:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:24.490 11:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:24.490 11:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:24.490 11:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:24.490 11:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:24.490 11:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:24.490 11:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.490 11:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.490 11:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.396 11:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:26.396 00:23:26.396 real 0m15.682s 00:23:26.396 user 0m46.259s 00:23:26.396 sys 0m6.452s 00:23:26.396 11:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.396 11:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.396 ************************************ 
00:23:26.396 END TEST nvmf_fio_host 00:23:26.396 ************************************ 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.656 ************************************ 00:23:26.656 START TEST nvmf_failover 00:23:26.656 ************************************ 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:26.656 * Looking for test storage... 00:23:26.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.656 11:34:40 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:26.656 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:26.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.657 --rc genhtml_branch_coverage=1 00:23:26.657 --rc genhtml_function_coverage=1 00:23:26.657 --rc genhtml_legend=1 00:23:26.657 --rc geninfo_all_blocks=1 00:23:26.657 --rc geninfo_unexecuted_blocks=1 00:23:26.657 00:23:26.657 ' 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:26.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.657 --rc genhtml_branch_coverage=1 00:23:26.657 --rc genhtml_function_coverage=1 00:23:26.657 --rc genhtml_legend=1 00:23:26.657 --rc geninfo_all_blocks=1 00:23:26.657 --rc geninfo_unexecuted_blocks=1 00:23:26.657 00:23:26.657 ' 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:26.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.657 --rc genhtml_branch_coverage=1 00:23:26.657 --rc genhtml_function_coverage=1 00:23:26.657 --rc genhtml_legend=1 00:23:26.657 --rc geninfo_all_blocks=1 00:23:26.657 --rc geninfo_unexecuted_blocks=1 00:23:26.657 00:23:26.657 ' 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:26.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.657 --rc genhtml_branch_coverage=1 00:23:26.657 --rc genhtml_function_coverage=1 00:23:26.657 --rc genhtml_legend=1 00:23:26.657 --rc 
geninfo_all_blocks=1 00:23:26.657 --rc geninfo_unexecuted_blocks=1 00:23:26.657 00:23:26.657 ' 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.657 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.917 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:26.917 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:26.917 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:26.917 11:34:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.491 11:34:46 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:33.491 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:33.491 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.491 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:33.492 Found net devices under 0000:86:00.0: cvl_0_0 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:33.492 Found net devices under 0000:86:00.1: cvl_0_1 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:33.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:23:33.492 00:23:33.492 --- 10.0.0.2 ping statistics --- 00:23:33.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.492 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:33.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:23:33.492 00:23:33.492 --- 10.0.0.1 ping statistics --- 00:23:33.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.492 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2358637 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 2358637 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2358637 ']' 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.492 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.492 [2024-11-19 11:34:46.430036] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:23:33.492 [2024-11-19 11:34:46.430087] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.492 [2024-11-19 11:34:46.513275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:33.492 [2024-11-19 11:34:46.555639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.492 [2024-11-19 11:34:46.555675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.493 [2024-11-19 11:34:46.555682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.493 [2024-11-19 11:34:46.555688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:33.493 [2024-11-19 11:34:46.555694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.493 [2024-11-19 11:34:46.557141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.493 [2024-11-19 11:34:46.557249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.493 [2024-11-19 11:34:46.557250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.493 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.493 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:33.493 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.493 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.493 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.493 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.493 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:33.493 [2024-11-19 11:34:46.858086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.493 11:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:33.493 Malloc0 00:23:33.493 11:34:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:33.752 11:34:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:33.752 11:34:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.011 [2024-11-19 11:34:47.703556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.011 11:34:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:34.270 [2024-11-19 11:34:47.912135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:34.270 11:34:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:34.530 [2024-11-19 11:34:48.112786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:34.530 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:34.530 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2358921 00:23:34.530 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:34.530 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2358921 /var/tmp/bdevperf.sock 00:23:34.530 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2358921 ']' 00:23:34.530 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.530 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.530 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.530 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.530 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:34.791 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.791 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:34.791 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:35.051 NVMe0n1 00:23:35.051 11:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:35.621 00:23:35.621 11:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2359150 00:23:35.621 11:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.621 11:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:23:36.558 11:34:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.819 11:34:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:40.115 11:34:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:40.116 00:23:40.116 11:34:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:40.376 [2024-11-19 11:34:54.010358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 
00:23:40.376 [2024-11-19 11:34:54.010448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010526] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 [2024-11-19 11:34:54.010584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2060 is same with the state(6) to be set 00:23:40.376 11:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:43.673 11:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:43.673 [2024-11-19 11:34:57.227471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:43.673 11:34:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:44.611 11:34:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:44.872 [2024-11-19 11:34:58.447120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2e30 is same with the state(6) to be set
00:23:44.872 [... last message repeated 28 times for tqpair=0x9d2e30, 11:34:58.447161 - 11:34:58.447329]
00:23:44.872 11:34:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2359150
00:23:51.464 {
00:23:51.464   "results": [
00:23:51.464     {
00:23:51.464       "job": "NVMe0n1",
00:23:51.464       "core_mask": "0x1",
00:23:51.464       "workload": "verify",
00:23:51.464       "status": "finished",
00:23:51.464       "verify_range": {
00:23:51.464         "start": 0,
00:23:51.464         "length": 16384
00:23:51.464       },
00:23:51.464       "queue_depth": 128,
00:23:51.464       "io_size": 4096,
00:23:51.464       "runtime": 15.005262,
00:23:51.464       "iops": 10942.627992766804,
00:23:51.464       "mibps": 42.74464059674533,
00:23:51.464       "io_failed": 8037,
00:23:51.464       "io_timeout": 0,
00:23:51.464       "avg_latency_us": 11129.424395506416,
00:23:51.464       "min_latency_us": 436.31304347826085,
00:23:51.464       "max_latency_us": 21769.34956521739
00:23:51.464     }
00:23:51.464   ],
00:23:51.464   "core_count": 1
00:23:51.464 }
00:23:51.464 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2358921
00:23:51.464 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2358921 ']'
00:23:51.464 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2358921
00:23:51.464 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:23:51.464 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:51.464 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2358921
00:23:51.464 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:51.464 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:51.464 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2358921'
00:23:51.464 killing process with pid 2358921
00:23:51.464 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2358921
00:23:51.464 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2358921
00:23:51.464 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:51.464 [2024-11-19 11:34:48.188589] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:23:51.464 [2024-11-19 11:34:48.188643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358921 ]
00:23:51.464 [2024-11-19 11:34:48.262283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:51.464 [2024-11-19 11:34:48.304005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:51.464 Running I/O for 15 seconds...
00:23:51.464 11003.00 IOPS, 42.98 MiB/s [2024-11-19T10:35:05.245Z]
00:23:51.464 [2024-11-19 11:34:50.453123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:51.464 [2024-11-19 11:34:50.453166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.464 [... print_command/print_completion pairs repeated, 11:34:50.453184 - 11:34:50.454401, for WRITE lba:98248-98368 and READ lba:97352-97872 (len:8 each); every completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:51.467 [2024-11-19 11:34:50.454409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.467 [2024-11-19 11:34:50.454415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 
[2024-11-19 11:34:50.454665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-11-19 11:34:50.454878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.467 [2024-11-19 11:34:50.454884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.454892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:50.454898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.454906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 
[2024-11-19 11:34:50.454912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.454921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:50.454927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.454935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:50.454941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.454952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:50.454959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.454967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:50.454973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.454981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:50.454987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.454997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:50.455004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.455011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:50.455020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.455028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:50.455034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.455043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:50.455049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.455056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bafd60 is same with the state(6) to be set 00:23:51.468 [2024-11-19 11:34:50.455065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.468 [2024-11-19 11:34:50.455072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.468 [2024-11-19 11:34:50.455078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98232 len:8 PRP1 0x0 PRP2 0x0 00:23:51.468 [2024-11-19 11:34:50.455085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.455130] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:51.468 [2024-11-19 11:34:50.455154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.468 [2024-11-19 11:34:50.455161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.455169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.468 [2024-11-19 11:34:50.455175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.455182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.468 [2024-11-19 11:34:50.455189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.455195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.468 [2024-11-19 11:34:50.455202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:50.455209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
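The abort burst above follows a fixed two-line pattern (a print_command NOTICE followed by an "ABORTED - SQ DELETION" completion) before the failover notice. A minimal, hypothetical helper (not part of SPDK; the regex assumes exactly the log format shown here) that tallies the aborted I/O commands per opcode, which makes bursts like this easier to summarize:

```python
import re
from collections import Counter

# Matches nvme_io_qpair_print_command NOTICE lines as printed in this log.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:\d+ lba:(?P<lba>\d+)"
)

def tally_aborted(log_text: str) -> Counter:
    """Count printed I/O commands per opcode. In this log each such line
    is immediately followed by an 'ABORTED - SQ DELETION' completion, so
    the count approximates the number of aborted commands."""
    ops = Counter()
    for match in CMD_RE.finditer(log_text):
        ops[match.group("op")] += 1
    return ops

sample = (
    "[2024-11-19 11:34:50.454088] nvme_qpair.c: 243:"
    "nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 "
    "nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0"
)
print(tally_aborted(sample))  # Counter({'READ': 1})
```

This is only a log-analysis sketch for reading the transcript; the aborts themselves are expected during the failover/reset exercised by this test.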
00:23:51.468 [2024-11-19 11:34:50.458070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:51.468 [2024-11-19 11:34:50.458097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8b340 (9): Bad file descriptor 00:23:51.468 [2024-11-19 11:34:50.529710] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:51.468 10603.00 IOPS, 41.42 MiB/s [2024-11-19T10:35:05.249Z] 10813.67 IOPS, 42.24 MiB/s [2024-11-19T10:35:05.249Z] 10895.50 IOPS, 42.56 MiB/s [2024-11-19T10:35:05.249Z] [2024-11-19 11:34:54.011905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:54.011939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:54.011957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:54.011970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:54.011979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:54.011986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.468 [2024-11-19 11:34:54.011995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.468 [2024-11-19 11:34:54.012002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command / "ABORTED - SQ DELETION (00/08)" pairs repeated for READ commands lba:41384 through lba:41600 and WRITE commands lba:41624 through lba:41656 (timestamps 11:34:54.012010 to 11:34:54.012494) elided ...]
00:23:51.470 [2024-11-19 11:34:54.012502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 
11:34:54.012598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:66 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:51.470 [2024-11-19 11:34:54.012772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.012993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.012999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.013009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.470 [2024-11-19 11:34:54.013016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.470 [2024-11-19 11:34:54.013024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.471 [2024-11-19 11:34:54.013031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 
11:34:54.013039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.471 [2024-11-19 11:34:54.013047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.471 [2024-11-19 11:34:54.013062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.471 [2024-11-19 11:34:54.013076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.471 [2024-11-19 11:34:54.013094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.471 [2024-11-19 11:34:54.013109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.471 [2024-11-19 11:34:54.013126] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.471 [2024-11-19 11:34:54.013141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.471 [2024-11-19 11:34:54.013159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42008 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42016 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013249] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42024 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42032 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42040 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42048 len:8 PRP1 0x0 PRP2 0x0 
00:23:51.471 [2024-11-19 11:34:54.013337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42056 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42064 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42072 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013421] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42080 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42088 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42096 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013505] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42104 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42112 len:8 PRP1 0x0 PRP2 0x0 00:23:51.471 [2024-11-19 11:34:54.013537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.471 [2024-11-19 11:34:54.013544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.471 [2024-11-19 11:34:54.013549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.471 [2024-11-19 11:34:54.013555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42120 len:8 PRP1 0x0 PRP2 0x0 00:23:51.472 [2024-11-19 11:34:54.013561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.472 [2024-11-19 11:34:54.013568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.472 [2024-11-19 11:34:54.013572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.472 [2024-11-19 11:34:54.013578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42128 len:8 PRP1 0x0 PRP2 0x0 00:23:51.472 [2024-11-19 11:34:54.013584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.472 [2024-11-19 11:34:54.013592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.472 [2024-11-19 11:34:54.013597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.472 [2024-11-19 11:34:54.013603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42136 len:8 PRP1 0x0 PRP2 0x0 00:23:51.472 [2024-11-19 11:34:54.013609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.472 [2024-11-19 11:34:54.013616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.472 [2024-11-19 11:34:54.013621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.472 [2024-11-19 11:34:54.013626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42144 len:8 PRP1 0x0 PRP2 0x0 00:23:51.472 [2024-11-19 11:34:54.013632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.472 [2024-11-19 11:34:54.013639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.472 [2024-11-19 11:34:54.013644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.472 [2024-11-19 11:34:54.013650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42152 len:8 PRP1 0x0 PRP2 0x0 00:23:51.472 [2024-11-19 11:34:54.013658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.472 [2024-11-19 11:34:54.013665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.472 [2024-11-19 11:34:54.013670] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.472 [2024-11-19 11:34:54.013675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42160 len:8 PRP1 0x0 PRP2 0x0 00:23:51.472 [2024-11-19 11:34:54.013682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.472 [2024-11-19 11:34:54.013688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.472 [2024-11-19 11:34:54.013694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.472 [2024-11-19 11:34:54.013699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42168 len:8 PRP1 0x0 PRP2 0x0 00:23:51.472 [2024-11-19 11:34:54.013706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.472 [2024-11-19 11:34:54.013712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.472 [2024-11-19 11:34:54.013717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.472 [2024-11-19 11:34:54.013724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42176 len:8 PRP1 0x0 PRP2 0x0 00:23:51.472 [2024-11-19 11:34:54.013730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.472 [2024-11-19 11:34:54.013737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.472 [2024-11-19 11:34:54.013741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.472 [2024-11-19 11:34:54.013747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42184 len:8 PRP1 0x0 PRP2 0x0 00:23:51.472 
00:23:51.472 [2024-11-19 11:34:54.013753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.472 [2024-11-19 11:34:54.013760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:51.472 [2024-11-19 11:34:54.013765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:51.472 [2024-11-19 11:34:54.013770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42192 len:8 PRP1 0x0 PRP2 0x0
00:23:51.472 [2024-11-19 11:34:54.013776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.472 [... identical abort/manual-complete cycle repeated for WRITE lba:42200 through lba:42368 and READ lba:41608, lba:41616 ...]
00:23:51.473 [2024-11-19 11:34:54.025411] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:51.473 [2024-11-19 11:34:54.025441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:51.473 [2024-11-19 11:34:54.025450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.473 [... same ASYNC EVENT REQUEST abort repeated for cid:2, cid:1, cid:0 ...]
00:23:51.473 [2024-11-19 11:34:54.025515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:23:51.473 [2024-11-19 11:34:54.025553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8b340 (9): Bad file descriptor
00:23:51.473 [2024-11-19 11:34:54.029417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:51.473 [2024-11-19 11:34:54.093420] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:23:51.473 10739.80 IOPS, 41.95 MiB/s [2024-11-19T10:35:05.254Z]
00:23:51.473 10822.83 IOPS, 42.28 MiB/s [2024-11-19T10:35:05.254Z]
00:23:51.473 10878.43 IOPS, 42.49 MiB/s [2024-11-19T10:35:05.254Z]
00:23:51.473 10898.88 IOPS, 42.57 MiB/s [2024-11-19T10:35:05.254Z]
00:23:51.473 10933.00 IOPS, 42.71 MiB/s [2024-11-19T10:35:05.254Z]
00:23:51.473 [2024-11-19 11:34:58.449081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.473 [2024-11-19 11:34:58.449117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.473 [... same READ abort repeated (varying cid) for lba:60568 through lba:60800 ...]
00:23:51.474 [2024-11-19 11:34:58.449582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:51.474 [2024-11-19 11:34:58.449589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:51.475 [... same WRITE abort repeated (varying cid) for lba:60824 through lba:61120 ...]
00:23:51.475 [2024-11-19 11:34:58.450160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET
0x0 len:0x1000 00:23:51.475 [2024-11-19 11:34:58.450166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.475 [2024-11-19 11:34:58.450174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.475 [2024-11-19 11:34:58.450180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.475 [2024-11-19 11:34:58.450188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.475 [2024-11-19 11:34:58.450195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 
[2024-11-19 11:34:58.450413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450494] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.476 [2024-11-19 11:34:58.450529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.476 [2024-11-19 11:34:58.450565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61336 len:8 PRP1 0x0 PRP2 0x0 00:23:51.476 [2024-11-19 11:34:58.450572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.476 [2024-11-19 11:34:58.450587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.476 [2024-11-19 11:34:58.450593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61344 len:8 PRP1 0x0 PRP2 0x0 00:23:51.476 [2024-11-19 11:34:58.450599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.476 [2024-11-19 11:34:58.450611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.476 [2024-11-19 11:34:58.450617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61352 len:8 PRP1 0x0 PRP2 0x0 00:23:51.476 [2024-11-19 11:34:58.450625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.476 [2024-11-19 11:34:58.450637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.476 [2024-11-19 11:34:58.450647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61360 len:8 PRP1 0x0 PRP2 0x0 00:23:51.476 [2024-11-19 11:34:58.450654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.476 [2024-11-19 11:34:58.450666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.476 [2024-11-19 11:34:58.450671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61368 len:8 PRP1 0x0 PRP2 0x0 00:23:51.476 [2024-11-19 11:34:58.450678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.476 [2024-11-19 11:34:58.450689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:23:51.476 [2024-11-19 11:34:58.450694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61376 len:8 PRP1 0x0 PRP2 0x0 00:23:51.476 [2024-11-19 11:34:58.450701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.476 [2024-11-19 11:34:58.450708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.476 [2024-11-19 11:34:58.450713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.476 [2024-11-19 11:34:58.450719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61384 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.450725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.450731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.450736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.450742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61392 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.450748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.450754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.450761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.450767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61400 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.450773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.450780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.450784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.450790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61408 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.450796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.450803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.450807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.450813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61416 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.450821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.450827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.450834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.450839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61424 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.450845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.450852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 
[2024-11-19 11:34:58.450857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.450862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61432 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.450869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.450876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.450881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.450886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61440 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.450892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.450899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.450904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.450909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61448 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.450916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.450922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.450927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.450933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:61456 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.450939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.450945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.450957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.450962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61464 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.450968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.450975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.450980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.450986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61472 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.450992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.450999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.451003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.451009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61480 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.451016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.451025] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.451030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.451035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61488 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.451041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.451048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.451053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.451058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61496 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.451064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.451071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.451075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.451081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61504 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.451087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.451093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.451098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 
11:34:58.451104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61512 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.451110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.451116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.451121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.451126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61520 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.451133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.451139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.451145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.451151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61528 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.451157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.451163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.451168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.477 [2024-11-19 11:34:58.451174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61536 len:8 PRP1 0x0 PRP2 0x0 00:23:51.477 [2024-11-19 11:34:58.451180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.477 [2024-11-19 11:34:58.451186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.477 [2024-11-19 11:34:58.451191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.478 [2024-11-19 11:34:58.451196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61544 len:8 PRP1 0x0 PRP2 0x0 00:23:51.478 [2024-11-19 11:34:58.451206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.478 [2024-11-19 11:34:58.451213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.478 [2024-11-19 11:34:58.451218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.478 [2024-11-19 11:34:58.451223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61552 len:8 PRP1 0x0 PRP2 0x0 00:23:51.478 [2024-11-19 11:34:58.451230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.478 [2024-11-19 11:34:58.451236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.478 [2024-11-19 11:34:58.451241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.478 [2024-11-19 11:34:58.451246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61560 len:8 PRP1 0x0 PRP2 0x0 00:23:51.478 [2024-11-19 11:34:58.451252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.478 [2024-11-19 11:34:58.451259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.478 [2024-11-19 11:34:58.451264] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.478 [2024-11-19 11:34:58.451270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61568 len:8 PRP1 0x0 PRP2 0x0 00:23:51.478 [2024-11-19 11:34:58.461812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.478 [2024-11-19 11:34:58.461827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.478 [2024-11-19 11:34:58.461836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.478 [2024-11-19 11:34:58.461843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61576 len:8 PRP1 0x0 PRP2 0x0 00:23:51.478 [2024-11-19 11:34:58.461852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.478 [2024-11-19 11:34:58.461861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:51.478 [2024-11-19 11:34:58.461868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:51.478 [2024-11-19 11:34:58.461875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60808 len:8 PRP1 0x0 PRP2 0x0 00:23:51.478 [2024-11-19 11:34:58.461883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.478 [2024-11-19 11:34:58.461933] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:51.478 [2024-11-19 11:34:58.461968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.478 [2024-11-19 11:34:58.461979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.478 [2024-11-19 11:34:58.461989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.478 [2024-11-19 11:34:58.461998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.478 [2024-11-19 11:34:58.462007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.478 [2024-11-19 11:34:58.462016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.478 [2024-11-19 11:34:58.462026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.478 [2024-11-19 11:34:58.462038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.478 [2024-11-19 11:34:58.462047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:51.478 [2024-11-19 11:34:58.462074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8b340 (9): Bad file descriptor 00:23:51.478 [2024-11-19 11:34:58.465950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:51.478 [2024-11-19 11:34:58.495356] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
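The qpair abort records above share a fixed `key:value` token layout (`sqid`, `cid`, `nsid`, `lba`, `len`). A minimal bash sketch for tallying such records when triaging a dump like this one — an illustrative helper, not part of the SPDK test suite; the sample lines mirror the log format above:

```shell
#!/usr/bin/env bash
# Tally WRITE vs READ command records in an SPDK qpair abort dump.
# Sample input copied from the record format shown above.
log='[2024-11-19 11:34:58.449831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-19 11:34:58.461875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60808 len:8 PRP1 0x0 PRP2 0x0'

# grep -c counts matching lines; the opcode follows the *NOTICE* marker.
writes=$(grep -c 'print_command: \*NOTICE\*: WRITE ' <<<"$log")
reads=$(grep -c 'print_command: \*NOTICE\*: READ ' <<<"$log")
echo "aborted WRITEs: $writes, aborted READs: $reads"
# → aborted WRITEs: 1, aborted READs: 1
```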
00:23:51.478 10896.50 IOPS, 42.56 MiB/s [2024-11-19T10:35:05.259Z] 10904.00 IOPS, 42.59 MiB/s [2024-11-19T10:35:05.259Z] 10915.50 IOPS, 42.64 MiB/s [2024-11-19T10:35:05.259Z] 10938.77 IOPS, 42.73 MiB/s [2024-11-19T10:35:05.259Z] 10942.00 IOPS, 42.74 MiB/s [2024-11-19T10:35:05.259Z] 10937.93 IOPS, 42.73 MiB/s 00:23:51.478 Latency(us) 00:23:51.478 [2024-11-19T10:35:05.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.478 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:51.478 Verification LBA range: start 0x0 length 0x4000 00:23:51.478 NVMe0n1 : 15.01 10942.63 42.74 535.61 0.00 11129.42 436.31 21769.35 00:23:51.478 [2024-11-19T10:35:05.259Z] =================================================================================================================== 00:23:51.478 [2024-11-19T10:35:05.259Z] Total : 10942.63 42.74 535.61 0.00 11129.42 436.31 21769.35 00:23:51.478 Received shutdown signal, test time was about 15.000000 seconds 00:23:51.478 00:23:51.478 Latency(us) 00:23:51.478 [2024-11-19T10:35:05.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.478 [2024-11-19T10:35:05.259Z] =================================================================================================================== 00:23:51.478 [2024-11-19T10:35:05.259Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2361527 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2361527 /var/tmp/bdevperf.sock 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2361527 ']' 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:51.478 11:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:51.478 [2024-11-19 11:35:05.073627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:51.478 11:35:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:51.739 [2024-11-19 11:35:05.278227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:51.739 
11:35:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:51.999 NVMe0n1 00:23:51.999 11:35:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:52.260 00:23:52.260 11:35:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:52.520 00:23:52.520 11:35:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:52.520 11:35:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:52.780 11:35:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:52.780 11:35:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:56.079 11:35:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:56.079 11:35:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:56.079 11:35:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:56.079 11:35:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2362377 00:23:56.079 11:35:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2362377 00:23:57.461 { 00:23:57.461 "results": [ 00:23:57.461 { 00:23:57.461 "job": "NVMe0n1", 00:23:57.461 "core_mask": "0x1", 00:23:57.461 "workload": "verify", 00:23:57.461 "status": "finished", 00:23:57.461 "verify_range": { 00:23:57.461 "start": 0, 00:23:57.461 "length": 16384 00:23:57.461 }, 00:23:57.461 "queue_depth": 128, 00:23:57.461 "io_size": 4096, 00:23:57.461 "runtime": 1.008582, 00:23:57.461 "iops": 10952.009851454815, 00:23:57.461 "mibps": 42.78128848224537, 00:23:57.461 "io_failed": 0, 00:23:57.461 "io_timeout": 0, 00:23:57.461 "avg_latency_us": 11630.429309842635, 00:23:57.461 "min_latency_us": 1823.6104347826088, 00:23:57.461 "max_latency_us": 12309.370434782608 00:23:57.461 } 00:23:57.461 ], 00:23:57.461 "core_count": 1 00:23:57.461 } 00:23:57.461 11:35:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:57.461 [2024-11-19 11:35:04.688835] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:23:57.461 [2024-11-19 11:35:04.688887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361527 ] 00:23:57.461 [2024-11-19 11:35:04.764633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.461 [2024-11-19 11:35:04.802570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.461 [2024-11-19 11:35:06.515212] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:57.461 [2024-11-19 11:35:06.515255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.461 [2024-11-19 11:35:06.515267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.461 [2024-11-19 11:35:06.515276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.461 [2024-11-19 11:35:06.515283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.461 [2024-11-19 11:35:06.515290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.461 [2024-11-19 11:35:06.515296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.461 [2024-11-19 11:35:06.515303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.461 [2024-11-19 11:35:06.515310] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.461 [2024-11-19 11:35:06.515317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:57.461 [2024-11-19 11:35:06.515342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:57.461 [2024-11-19 11:35:06.515357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59b340 (9): Bad file descriptor 00:23:57.461 [2024-11-19 11:35:06.518486] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:57.461 Running I/O for 1 seconds... 00:23:57.461 10892.00 IOPS, 42.55 MiB/s 00:23:57.461 Latency(us) 00:23:57.461 [2024-11-19T10:35:11.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.461 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:57.461 Verification LBA range: start 0x0 length 0x4000 00:23:57.461 NVMe0n1 : 1.01 10952.01 42.78 0.00 0.00 11630.43 1823.61 12309.37 00:23:57.461 [2024-11-19T10:35:11.242Z] =================================================================================================================== 00:23:57.461 [2024-11-19T10:35:11.242Z] Total : 10952.01 42.78 0.00 0.00 11630.43 1823.61 12309.37 00:23:57.462 11:35:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:57.462 11:35:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:57.462 11:35:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:57.722 11:35:11 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:57.722 11:35:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:57.722 11:35:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:57.982 11:35:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2361527 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2361527 ']' 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2361527 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2361527 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2361527' 00:24:01.281 killing 
process with pid 2361527 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2361527 00:24:01.281 11:35:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2361527 00:24:01.540 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:01.540 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.540 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:01.540 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:01.540 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:01.540 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:01.540 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:01.540 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.540 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:01.540 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.540 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.540 rmmod nvme_tcp 00:24:01.800 rmmod nvme_fabrics 00:24:01.800 rmmod nvme_keyring 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2358637 ']' 00:24:01.800 11:35:15 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2358637 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2358637 ']' 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2358637 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2358637 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2358637' 00:24:01.800 killing process with pid 2358637 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2358637 00:24:01.800 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2358637 00:24:02.059 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:02.059 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:02.059 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:02.059 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:02.059 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:02.059 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:02.059 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:02.059 11:35:15 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:02.059 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:02.059 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.059 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.059 11:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.971 11:35:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:03.971 00:24:03.971 real 0m37.443s 00:24:03.971 user 1m58.473s 00:24:03.971 sys 0m7.954s 00:24:03.971 11:35:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.971 11:35:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:03.971 ************************************ 00:24:03.971 END TEST nvmf_failover 00:24:03.971 ************************************ 00:24:03.971 11:35:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:03.971 11:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:03.971 11:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.971 11:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.971 ************************************ 00:24:03.971 START TEST nvmf_host_discovery 00:24:03.971 ************************************ 00:24:03.971 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:04.232 * Looking for test storage... 
00:24:04.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.232 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:04.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.232 --rc genhtml_branch_coverage=1 00:24:04.232 --rc genhtml_function_coverage=1 00:24:04.232 --rc 
genhtml_legend=1 00:24:04.232 --rc geninfo_all_blocks=1 00:24:04.232 --rc geninfo_unexecuted_blocks=1 00:24:04.232 00:24:04.232 ' 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:04.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.233 --rc genhtml_branch_coverage=1 00:24:04.233 --rc genhtml_function_coverage=1 00:24:04.233 --rc genhtml_legend=1 00:24:04.233 --rc geninfo_all_blocks=1 00:24:04.233 --rc geninfo_unexecuted_blocks=1 00:24:04.233 00:24:04.233 ' 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:04.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.233 --rc genhtml_branch_coverage=1 00:24:04.233 --rc genhtml_function_coverage=1 00:24:04.233 --rc genhtml_legend=1 00:24:04.233 --rc geninfo_all_blocks=1 00:24:04.233 --rc geninfo_unexecuted_blocks=1 00:24:04.233 00:24:04.233 ' 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:04.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.233 --rc genhtml_branch_coverage=1 00:24:04.233 --rc genhtml_function_coverage=1 00:24:04.233 --rc genhtml_legend=1 00:24:04.233 --rc geninfo_all_blocks=1 00:24:04.233 --rc geninfo_unexecuted_blocks=1 00:24:04.233 00:24:04.233 ' 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.233 11:35:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.233 11:35:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.233 11:35:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:04.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:04.233 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:04.234 11:35:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.818 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.818 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:10.818 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:10.818 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:10.819 
11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.819 11:35:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:10.819 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:10.819 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:10.819 Found net devices under 0000:86:00.0: cvl_0_0 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:10.819 Found net devices under 0000:86:00.1: cvl_0_1 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:24:10.819 00:24:10.819 --- 10.0.0.2 ping statistics --- 00:24:10.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.819 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:24:10.819 00:24:10.819 --- 10.0.0.1 ping statistics --- 00:24:10.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.819 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.819 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.820 
11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2366825 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2366825 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2366825 ']' 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.820 11:35:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 [2024-11-19 11:35:23.946711] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:24:10.820 [2024-11-19 11:35:23.946754] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.820 [2024-11-19 11:35:24.027241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.820 [2024-11-19 11:35:24.068548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.820 [2024-11-19 11:35:24.068586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.820 [2024-11-19 11:35:24.068593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.820 [2024-11-19 11:35:24.068600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.820 [2024-11-19 11:35:24.068606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:10.820 [2024-11-19 11:35:24.069177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 [2024-11-19 11:35:24.208561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 [2024-11-19 11:35:24.220749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:10.820 11:35:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 null0 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 null1 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2366848 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2366848 /tmp/host.sock 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2366848 ']' 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:10.820 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 [2024-11-19 11:35:24.298982] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:24:10.820 [2024-11-19 11:35:24.299021] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366848 ] 00:24:10.820 [2024-11-19 11:35:24.373161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.820 [2024-11-19 11:35:24.414292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:10.820 
11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:10.820 11:35:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:10.820 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:11.082 
11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.082 [2024-11-19 11:35:24.842306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:11.082 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
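The trace above repeatedly runs a retry helper (`local cond=…`, `local max=10`, `(( max-- ))`, `eval "$cond"`, `sleep 1`). A minimal pure-bash sketch of that polling pattern is below; this is an illustration of the idiom visible in the log, not SPDK's actual `waitforcondition` from autotest_common.sh, and the `WAIT_INTERVAL` variable is an assumption added here so the sleep is tunable.

```shell
#!/usr/bin/env bash
# Sketch of the retry loop seen in the trace: evaluate a condition up to
# $max times, sleeping between attempts; return 0 on first success,
# 1 if the attempts are exhausted. Names (cond, max) mirror the log.
waitforcondition() {
	local cond=$1
	local max=${2:-10}
	while ((max--)); do
		if eval "$cond"; then
			return 0
		fi
		sleep "${WAIT_INTERVAL:-1}" # hypothetical knob; the log always sleeps 1
	done
	return 1
}
```

In the log the condition strings are things like `'[[ "$(get_subsystem_names)" == "nvme0" ]]'`, which is why they are passed quoted and expanded via `eval`.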
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:11.343 11:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.343 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:11.344 11:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:11.344 11:35:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:11.344 11:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.344 11:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:24:11.344 11:35:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:24:11.914 [2024-11-19 11:35:25.588027] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:24:11.914 [2024-11-19 11:35:25.588047] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:24:11.914 [2024-11-19 11:35:25.588059] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:11.914 [2024-11-19 11:35:25.674317] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:24:12.175 [2024-11-19 11:35:25.897461] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:24:12.175 [2024-11-19 11:35:25.898165] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x167bdd0:1 started.
00:24:12.175 [2024-11-19 11:35:25.899589] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:12.175 [2024-11-19 11:35:25.899604] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:12.175 [2024-11-19 11:35:25.905665] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x167bdd0 was disconnected and freed. delete nvme_qpair.
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:12.435 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:12.696 [2024-11-19 11:35:26.249716] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x167c1a0:1 started.
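The `get_notification_count` steps in the trace query `notify_get_notifications -i $notify_id`, count the returned events with `jq '. | length'`, and then advance `notify_id` (0→0, 0→1, 1→2, 2→2 across the run). A hedged pure-bash sketch of that bookkeeping is below; the `events` array is an assumption standing in for the JSON the RPC returns, and this mirrors the pattern inferred from the log rather than discovery.sh itself.

```shell
#!/usr/bin/env bash
# Sketch of the notification bookkeeping visible in the trace: count
# events newer than the last-seen notify_id, then advance notify_id by
# that count so the next query only sees newer events.
events=()              # stub for all notifications emitted so far
notify_id=0            # highest notification id already accounted for
notification_count=0   # result of the most recent query

get_notification_count() {
	# Events at index >= notify_id are "new", like `-i $notify_id`.
	notification_count=$((${#events[@]} - notify_id))
	notify_id=$((notify_id + notification_count))
}
```

Emitting one event and calling the function again reproduces the log's sequence of `notification_count`/`notify_id` pairs.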
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:12.696 [2024-11-19 11:35:26.256504] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x167c1a0 was disconnected and freed. delete nvme_qpair.
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:12.696 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
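The `get_bdev_list` checks in the trace pipe `rpc_cmd bdev_get_bdevs | jq -r '.[].name'` through `sort | xargs` so that an unordered set of names compares as a stable single-line string like `nvme0n1 nvme0n2`. A self-contained sketch of that normalization idiom is below; `rpc_stub` is a hypothetical stand-in for the real RPC call, not part of SPDK.

```shell
#!/usr/bin/env bash
# Sketch of the list-normalization idiom from the trace: names arrive
# one per line (as jq -r '.[].name' would print them), then
# `sort | xargs` yields a deterministic space-separated line.
rpc_stub() {
	# Unordered bdev names, one per line (stands in for the RPC output).
	printf '%s\n' nvme0n2 nvme0n1
}

get_bdev_list() {
	rpc_stub | sort | xargs
}
```

The `sort` makes the comparison order-independent and `xargs` joins the lines, which is why the trace can compare against the literal `"nvme0n1 nvme0n2"`.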
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:12.697 [2024-11-19 11:35:26.350397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:12.697 [2024-11-19 11:35:26.350503] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:24:12.697 [2024-11-19 11:35:26.350523] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:12.697 [2024-11-19 11:35:26.436764] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:12.697 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.958 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:24:12.958 11:35:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:24:12.958 [2024-11-19 11:35:26.707011] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
00:24:12.958 [2024-11-19 11:35:26.707044] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:12.958 [2024-11-19 11:35:26.707053] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:12.958 [2024-11-19 11:35:26.707057] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.901 [2024-11-19 11:35:27.610539] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:24:13.901 [2024-11-19 11:35:27.610559] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:13.901 [2024-11-19 11:35:27.611803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.901 [2024-11-19 11:35:27.611820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:13.901 [2024-11-19 11:35:27.611828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.901 [2024-11-19 11:35:27.611835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:13.901 [2024-11-19 11:35:27.611842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.901 [2024-11-19 11:35:27.611849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:13.901 [2024-11-19 11:35:27.611860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.901 [2024-11-19 11:35:27.611867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:13.901 [2024-11-19 11:35:27.611873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164c390 is same with the state(6) to be set
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
[2024-11-19 11:35:27.621817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164c390 (9): Bad file descriptor
00:24:13.901 [2024-11-19 11:35:27.631852] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:13.901 [2024-11-19 11:35:27.631868] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:13.901 [2024-11-19 11:35:27.631873] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:13.901 [2024-11-19 11:35:27.631878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:13.901 [2024-11-19 11:35:27.631895] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:13.901 [2024-11-19 11:35:27.632182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.901 [2024-11-19 11:35:27.632197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x164c390 with addr=10.0.0.2, port=4420 00:24:13.901 [2024-11-19 11:35:27.632206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164c390 is same with the state(6) to be set 00:24:13.901 [2024-11-19 11:35:27.632217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164c390 (9): Bad file descriptor 00:24:13.901 [2024-11-19 11:35:27.632227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:13.901 [2024-11-19 11:35:27.632233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:13.901 [2024-11-19 11:35:27.632241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:13.901 [2024-11-19 11:35:27.632247] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:13.901 [2024-11-19 11:35:27.632252] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:13.901 [2024-11-19 11:35:27.632256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:13.901 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.901 [2024-11-19 11:35:27.641927] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:13.901 [2024-11-19 11:35:27.641938] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:13.901 [2024-11-19 11:35:27.641942] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:13.901 [2024-11-19 11:35:27.641951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:13.901 [2024-11-19 11:35:27.641964] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:13.901 [2024-11-19 11:35:27.642120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.901 [2024-11-19 11:35:27.642132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x164c390 with addr=10.0.0.2, port=4420 00:24:13.901 [2024-11-19 11:35:27.642139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164c390 is same with the state(6) to be set 00:24:13.901 [2024-11-19 11:35:27.642149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164c390 (9): Bad file descriptor 00:24:13.901 [2024-11-19 11:35:27.642159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:13.901 [2024-11-19 11:35:27.642165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:13.901 [2024-11-19 11:35:27.642172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:13.901 [2024-11-19 11:35:27.642178] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:13.901 [2024-11-19 11:35:27.642182] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:13.901 [2024-11-19 11:35:27.642186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:13.902 [2024-11-19 11:35:27.651996] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:13.902 [2024-11-19 11:35:27.652007] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:13.902 [2024-11-19 11:35:27.652011] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:13.902 [2024-11-19 11:35:27.652015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:13.902 [2024-11-19 11:35:27.652027] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:13.902 [2024-11-19 11:35:27.652175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.902 [2024-11-19 11:35:27.652185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x164c390 with addr=10.0.0.2, port=4420 00:24:13.902 [2024-11-19 11:35:27.652192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164c390 is same with the state(6) to be set 00:24:13.902 [2024-11-19 11:35:27.652202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164c390 (9): Bad file descriptor 00:24:13.902 [2024-11-19 11:35:27.652211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:13.902 [2024-11-19 11:35:27.652218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:13.902 [2024-11-19 11:35:27.652224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:13.902 [2024-11-19 11:35:27.652230] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:13.902 [2024-11-19 11:35:27.652238] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:13.902 [2024-11-19 11:35:27.652242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:13.902 [2024-11-19 11:35:27.662059] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:13.902 [2024-11-19 11:35:27.662072] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:13.902 [2024-11-19 11:35:27.662076] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:13.902 [2024-11-19 11:35:27.662080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:13.902 [2024-11-19 11:35:27.662094] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:13.902 [2024-11-19 11:35:27.662194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.902 [2024-11-19 11:35:27.662207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x164c390 with addr=10.0.0.2, port=4420 00:24:13.902 [2024-11-19 11:35:27.662214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164c390 is same with the state(6) to be set 00:24:13.902 [2024-11-19 11:35:27.662224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164c390 (9): Bad file descriptor 00:24:13.902 [2024-11-19 11:35:27.662234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:13.902 [2024-11-19 11:35:27.662240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:13.902 [2024-11-19 11:35:27.662247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:13.902 [2024-11-19 11:35:27.662252] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:13.902 [2024-11-19 11:35:27.662257] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:13.902 [2024-11-19 11:35:27.662261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.902 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:13.902 [2024-11-19 11:35:27.672124] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:24:13.902 [2024-11-19 11:35:27.672140] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:13.902 [2024-11-19 11:35:27.672144] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:13.902 [2024-11-19 11:35:27.672148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:13.902 [2024-11-19 11:35:27.672161] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:13.902 [2024-11-19 11:35:27.672330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.902 [2024-11-19 11:35:27.672342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x164c390 with addr=10.0.0.2, port=4420 00:24:13.902 [2024-11-19 11:35:27.672348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164c390 is same with the state(6) to be set 00:24:13.902 [2024-11-19 11:35:27.672358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164c390 (9): Bad file descriptor 00:24:13.902 [2024-11-19 11:35:27.672367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:13.902 [2024-11-19 11:35:27.672373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:13.902 [2024-11-19 11:35:27.672380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:13.902 [2024-11-19 11:35:27.672386] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:13.902 [2024-11-19 11:35:27.672390] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:24:13.902 [2024-11-19 11:35:27.672394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:14.164 [2024-11-19 11:35:27.682192] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:14.164 [2024-11-19 11:35:27.682207] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:14.164 [2024-11-19 11:35:27.682212] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:14.164 [2024-11-19 11:35:27.682216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:14.164 [2024-11-19 11:35:27.682230] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:14.164 [2024-11-19 11:35:27.682415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.164 [2024-11-19 11:35:27.682427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x164c390 with addr=10.0.0.2, port=4420 00:24:14.164 [2024-11-19 11:35:27.682434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164c390 is same with the state(6) to be set 00:24:14.164 [2024-11-19 11:35:27.682445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164c390 (9): Bad file descriptor 00:24:14.164 [2024-11-19 11:35:27.682454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:14.164 [2024-11-19 11:35:27.682460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:14.164 [2024-11-19 11:35:27.682467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:14.164 [2024-11-19 11:35:27.682473] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:14.164 [2024-11-19 11:35:27.682477] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:14.164 [2024-11-19 11:35:27.682481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:14.164 [2024-11-19 11:35:27.692262] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:14.164 [2024-11-19 11:35:27.692273] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:14.164 [2024-11-19 11:35:27.692277] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:14.164 [2024-11-19 11:35:27.692281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:14.164 [2024-11-19 11:35:27.692293] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:14.164 [2024-11-19 11:35:27.692444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.164 [2024-11-19 11:35:27.692455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x164c390 with addr=10.0.0.2, port=4420 00:24:14.164 [2024-11-19 11:35:27.692462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164c390 is same with the state(6) to be set 00:24:14.164 [2024-11-19 11:35:27.692471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164c390 (9): Bad file descriptor 00:24:14.164 [2024-11-19 11:35:27.692481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:14.164 [2024-11-19 11:35:27.692487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:14.164 [2024-11-19 11:35:27.692493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:14.164 [2024-11-19 11:35:27.692499] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:14.164 [2024-11-19 11:35:27.692503] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:14.164 [2024-11-19 11:35:27.692507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:14.164 [2024-11-19 11:35:27.696817] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:14.164 [2024-11-19 11:35:27.696833] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.164 
11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.164 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:14.165 11:35:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.165 
11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:14.165 11:35:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.165 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.426 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.426 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:14.426 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:14.426 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:14.426 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:14.426 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:14.426 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.426 11:35:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.366 [2024-11-19 11:35:28.984976] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:15.366 [2024-11-19 11:35:28.984992] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:15.366 [2024-11-19 11:35:28.985002] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:15.366 [2024-11-19 11:35:29.072264] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:15.632 [2024-11-19 11:35:29.292304] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:15.633 [2024-11-19 11:35:29.292921] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x16821b0:1 started. 00:24:15.633 [2024-11-19 11:35:29.294510] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:15.633 [2024-11-19 11:35:29.294534] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.633 [2024-11-19 11:35:29.295765] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x16821b0 was disconnected and freed. delete nvme_qpair. 
00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.633 request: 00:24:15.633 { 00:24:15.633 "name": "nvme", 00:24:15.633 "trtype": "tcp", 00:24:15.633 "traddr": "10.0.0.2", 00:24:15.633 "adrfam": "ipv4", 00:24:15.633 "trsvcid": "8009", 00:24:15.633 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:15.633 "wait_for_attach": true, 00:24:15.633 "method": "bdev_nvme_start_discovery", 00:24:15.633 "req_id": 1 00:24:15.633 } 00:24:15.633 Got JSON-RPC error response 00:24:15.633 response: 00:24:15.633 { 00:24:15.633 "code": -17, 00:24:15.633 
"message": "File exists" 00:24:15.633 } 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:15.633 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.906 request: 00:24:15.906 { 00:24:15.906 "name": "nvme_second", 00:24:15.906 "trtype": "tcp", 00:24:15.906 "traddr": "10.0.0.2", 00:24:15.906 "adrfam": "ipv4", 00:24:15.906 "trsvcid": "8009", 00:24:15.906 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:15.906 "wait_for_attach": true, 00:24:15.906 "method": "bdev_nvme_start_discovery", 00:24:15.906 "req_id": 1 00:24:15.906 } 00:24:15.906 Got JSON-RPC error response 00:24:15.906 response: 00:24:15.906 { 00:24:15.906 "code": -17, 00:24:15.906 "message": "File exists" 00:24:15.906 } 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:15.906 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # xargs 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.907 
11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.907 11:35:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.902 [2024-11-19 11:35:30.521828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.902 [2024-11-19 11:35:30.521862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1649b60 with addr=10.0.0.2, port=8010 00:24:16.902 [2024-11-19 11:35:30.521880] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:16.902 [2024-11-19 11:35:30.521888] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:16.902 [2024-11-19 11:35:30.521894] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:17.842 [2024-11-19 11:35:31.524375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.842 [2024-11-19 11:35:31.524400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1649b60 with addr=10.0.0.2, port=8010 00:24:17.842 [2024-11-19 11:35:31.524411] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:17.842 [2024-11-19 11:35:31.524417] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:17.842 
[2024-11-19 11:35:31.524423] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:18.781 [2024-11-19 11:35:32.526564] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:18.781 request: 00:24:18.781 { 00:24:18.781 "name": "nvme_second", 00:24:18.781 "trtype": "tcp", 00:24:18.781 "traddr": "10.0.0.2", 00:24:18.781 "adrfam": "ipv4", 00:24:18.781 "trsvcid": "8010", 00:24:18.781 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:18.781 "wait_for_attach": false, 00:24:18.781 "attach_timeout_ms": 3000, 00:24:18.781 "method": "bdev_nvme_start_discovery", 00:24:18.781 "req_id": 1 00:24:18.781 } 00:24:18.781 Got JSON-RPC error response 00:24:18.781 response: 00:24:18.781 { 00:24:18.781 "code": -110, 00:24:18.781 "message": "Connection timed out" 00:24:18.781 } 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
sort 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:18.781 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2366848 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:19.042 rmmod nvme_tcp 00:24:19.042 rmmod nvme_fabrics 00:24:19.042 rmmod nvme_keyring 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2366825 ']' 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2366825 00:24:19.042 
11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2366825 ']' 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2366825 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2366825 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2366825' 00:24:19.042 killing process with pid 2366825 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2366825 00:24:19.042 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2366825 00:24:19.302 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:19.302 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:19.302 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:19.302 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:19.302 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:19.302 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:19.302 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:19.302 11:35:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:19.302 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:19.302 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.302 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.302 11:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.213 11:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:21.213 00:24:21.213 real 0m17.210s 00:24:21.213 user 0m20.582s 00:24:21.213 sys 0m5.756s 00:24:21.213 11:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.213 11:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.213 ************************************ 00:24:21.213 END TEST nvmf_host_discovery 00:24:21.213 ************************************ 00:24:21.213 11:35:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:21.213 11:35:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:21.213 11:35:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:21.213 11:35:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.474 ************************************ 00:24:21.474 START TEST nvmf_host_multipath_status 00:24:21.474 ************************************ 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh 
--transport=tcp 00:24:21.474 * Looking for test storage... 00:24:21.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:21.474 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 
00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:21.475 
11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:21.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.475 --rc genhtml_branch_coverage=1 00:24:21.475 --rc genhtml_function_coverage=1 00:24:21.475 --rc genhtml_legend=1 00:24:21.475 --rc geninfo_all_blocks=1 00:24:21.475 --rc geninfo_unexecuted_blocks=1 00:24:21.475 00:24:21.475 ' 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:21.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.475 --rc genhtml_branch_coverage=1 00:24:21.475 --rc genhtml_function_coverage=1 00:24:21.475 --rc genhtml_legend=1 00:24:21.475 --rc geninfo_all_blocks=1 00:24:21.475 --rc geninfo_unexecuted_blocks=1 00:24:21.475 00:24:21.475 ' 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:21.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.475 --rc genhtml_branch_coverage=1 00:24:21.475 --rc genhtml_function_coverage=1 00:24:21.475 --rc genhtml_legend=1 00:24:21.475 --rc geninfo_all_blocks=1 00:24:21.475 --rc geninfo_unexecuted_blocks=1 00:24:21.475 00:24:21.475 ' 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:21.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.475 --rc genhtml_branch_coverage=1 00:24:21.475 --rc genhtml_function_coverage=1 00:24:21.475 --rc genhtml_legend=1 00:24:21.475 --rc geninfo_all_blocks=1 00:24:21.475 --rc geninfo_unexecuted_blocks=1 00:24:21.475 00:24:21.475 ' 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 
00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:21.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:21.475 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:21.476 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.476 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:21.476 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:21.476 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:21.476 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.476 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.476 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.476 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:21.476 11:35:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:21.476 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:21.476 11:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:28.077 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:28.077 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:28.077 Found net devices under 0000:86:00.0: cvl_0_0 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.077 11:35:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:28.077 Found net devices under 0000:86:00.1: cvl_0_1 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.077 11:35:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.077 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.078 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.078 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.078 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.078 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.078 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.078 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.078 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.078 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.078 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.078 11:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:24:28.078 00:24:28.078 --- 10.0.0.2 ping statistics --- 00:24:28.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.078 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:24:28.078 00:24:28.078 --- 10.0.0.1 ping statistics --- 00:24:28.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.078 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2371925 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2371925 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2371925 ']' 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:28.078 [2024-11-19 11:35:41.176299] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:24:28.078 [2024-11-19 11:35:41.176348] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.078 [2024-11-19 11:35:41.255787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:28.078 [2024-11-19 11:35:41.297483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.078 [2024-11-19 11:35:41.297521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
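The `nvmf_tcp_init` sequence traced above (L`nvmf/common.sh@250`–`@291`) builds a loopback test topology: one physical port (`cvl_0_0`, the target side, 10.0.0.2) is moved into the network namespace `cvl_0_0_ns_spdk`, while its sibling (`cvl_0_1`, the initiator side, 10.0.0.1) stays in the root namespace, and an iptables rule admits NVMe/TCP traffic on port 4420. A sketch of the same steps, with a veth pair standing in for the two E810 ports; it prints the commands (dry run) rather than executing them, since applying this requires root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology from the log. Interface and
# namespace names are taken from the trace; the veth pair is an
# assumption standing in for the two physical NIC ports.
set -euo pipefail

run() { echo "+ $*"; }   # dry run: print each command; replace with "$@" to apply as root

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link add cvl_0_1 type veth peer name cvl_0_0    # stand-in for the real ports
run ip link set cvl_0_0 netns "$NS"                    # target side moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP (root namespace)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (inside namespace)
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                 # initiator -> target reachability check
```

This explains why the log's `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk`: the target must listen from the namespace that owns the 10.0.0.2 interface, while bdevperf connects from the root namespace.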
00:24:28.078 [2024-11-19 11:35:41.297528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.078 [2024-11-19 11:35:41.297534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.078 [2024-11-19 11:35:41.297539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.078 [2024-11-19 11:35:41.298791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.078 [2024-11-19 11:35:41.298794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2371925 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:28.078 [2024-11-19 11:35:41.599529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:28.078 Malloc0 00:24:28.078 11:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:28.338 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:28.597 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.857 [2024-11-19 11:35:42.431829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.857 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:28.857 [2024-11-19 11:35:42.620292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:29.118 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2372179 00:24:29.118 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:29.118 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:29.118 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2372179 /var/tmp/bdevperf.sock 00:24:29.118 11:35:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2372179 ']' 00:24:29.118 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.118 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.118 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.118 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.118 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:29.377 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.377 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:29.377 11:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:29.377 11:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:29.636 Nvme0n1 00:24:29.896 11:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:30.156 Nvme0n1 00:24:30.156 11:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:30.156 11:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:32.696 11:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:32.696 11:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:32.696 11:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:32.696 11:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:33.635 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:33.635 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:33.635 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.635 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:33.894 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.894 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:33.894 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.894 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:34.153 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:34.153 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:34.153 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.153 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:34.412 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.412 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:34.412 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.412 11:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:34.412 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.412 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:34.412 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.412 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:34.670 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.670 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:34.670 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.670 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:34.930 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.930 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:34.930 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:35.188 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:35.446 11:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:36.383 11:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:36.383 11:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:36.383 11:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.383 11:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:36.641 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:36.641 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:36.641 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.641 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:36.641 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.641 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:36.641 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.641 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:36.901 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.901 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:36.901 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.901 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:37.160 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.160 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:37.160 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.160 11:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:37.419 11:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.419 11:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:37.419 11:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.419 11:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:37.678 11:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.678 11:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:37.678 11:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:37.678 11:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:37.938 11:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:39.315 11:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:39.315 11:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:39.315 11:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.315 11:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:39.315 11:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.315 11:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:39.315 11:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:39.315 11:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.574 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:39.574 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:39.574 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.574 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:39.574 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.574 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:39.574 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.574 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:39.833 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.833 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:39.833 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.833 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:40.092 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.092 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:40.092 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.092 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:40.350 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.350 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:40.350 11:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:40.350 11:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:40.609 11:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:41.987 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:41.987 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:41.987 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.987 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:41.987 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.987 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:41.987 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.987 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:41.987 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.987 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:41.987 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.987 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:42.246 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.246 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:42.246 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.246 11:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:42.504 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.504 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:42.504 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.504 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:42.763 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.763 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:42.763 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.763 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:43.021 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.021 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:43.021 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:43.021 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:43.280 11:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:44.220 11:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:44.220 11:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:44.220 11:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.220 11:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:44.479 11:35:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.479 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:44.480 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.480 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.739 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.739 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.739 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.739 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:44.997 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.997 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:44.997 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.997 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:44.998 
11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.998 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:44.998 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.998 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:45.256 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:45.256 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:45.256 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.256 11:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.515 11:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:45.515 11:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:45.515 11:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:45.774 11:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:46.032 11:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:46.967 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:46.967 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:46.967 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.967 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:47.226 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:47.226 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:47.226 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.226 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:47.226 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.226 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:47.226 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.226 11:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:47.485 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.485 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:47.485 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.485 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:47.744 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.744 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:47.744 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:47.744 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.003 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:48.003 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:48.003 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.003 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:48.261 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.261 11:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:48.261 11:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:48.261 11:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:48.518 11:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:48.775 11:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:49.712 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:49.712 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:49.712 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:49.712 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:49.971 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.971 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:49.971 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.971 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:50.230 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.230 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:50.230 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.230 11:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:50.489 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.489 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:50.489 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:50.489 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:50.748 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.748 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:50.748 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.748 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:50.748 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.748 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:50.748 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.748 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:51.007 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.007 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:51.007 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:51.265 11:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:51.524 11:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:52.462 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:52.462 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:52.462 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.462 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:52.720 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.720 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:52.721 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.721 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:52.721 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.721 11:36:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:52.721 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:52.721 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.980 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.980 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:52.980 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.980 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:53.238 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.238 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:53.239 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.239 11:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:53.498 11:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.498 
11:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:53.498 11:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.498 11:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:53.757 11:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.757 11:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:53.757 11:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:53.757 11:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:54.016 11:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:54.954 11:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:54.954 11:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:54.954 11:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.954 11:36:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:55.212 11:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.212 11:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:55.212 11:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.212 11:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:55.471 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.471 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:55.471 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.471 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:55.729 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.729 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:55.729 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.729 11:36:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:55.729 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.729 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:55.729 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.729 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:55.987 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.987 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:55.987 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.987 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:56.245 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.245 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:56.245 11:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:56.504 11:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:56.762 11:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:57.701 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:57.701 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:57.701 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.701 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:57.959 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.959 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:57.959 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.959 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:58.218 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:58.218 11:36:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:58.219 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.219 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:58.478 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.478 11:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:58.478 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:58.478 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.478 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.478 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:58.478 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.478 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:58.737 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.737 
11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:58.737 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.737 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:58.997 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:58.997 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2372179 00:24:58.997 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2372179 ']' 00:24:58.997 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2372179 00:24:58.997 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:58.997 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.997 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2372179 00:24:58.997 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:58.997 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:58.997 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2372179' 00:24:58.997 killing process with pid 2372179 00:24:58.997 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2372179 00:24:58.997 
11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2372179
00:24:58.997 {
00:24:58.997 "results": [
00:24:58.997 {
00:24:58.997 "job": "Nvme0n1",
00:24:58.997 "core_mask": "0x4",
00:24:58.997 "workload": "verify",
00:24:58.997 "status": "terminated",
00:24:58.997 "verify_range": {
00:24:58.997 "start": 0,
00:24:58.997 "length": 16384
00:24:58.997 },
00:24:58.997 "queue_depth": 128,
00:24:58.997 "io_size": 4096,
00:24:58.997 "runtime": 28.691459,
00:24:58.997 "iops": 10411.635044422104,
00:24:58.997 "mibps": 40.670449392273845,
00:24:58.997 "io_failed": 0,
00:24:58.997 "io_timeout": 0,
00:24:58.997 "avg_latency_us": 12274.023466189275,
00:24:58.997 "min_latency_us": 537.8226086956522,
00:24:58.997 "max_latency_us": 3019898.88
00:24:58.997 }
00:24:58.997 ],
00:24:58.997 "core_count": 1
00:24:58.997 }
00:24:59.261 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2372179
00:24:59.261 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:59.261 [2024-11-19 11:35:42.693019] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:24:59.261 [2024-11-19 11:35:42.693072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372179 ]
00:24:59.261 [2024-11-19 11:35:42.771262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:59.261 [2024-11-19 11:35:42.812303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 Running I/O for 90 seconds...
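[editor's note] The port_status checks traced above each call `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` and filter the reply with a jq expression such as `.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current`. A minimal Python sketch of that filter follows, run against a canned reply; the reply structure is inferred only from the jq expressions in this log (it is not the full RPC schema), and the fixture values are hypothetical:

```python
import json

# Canned stand-in for a bdev_nvme_get_io_paths reply; field names mirror
# what the jq filters in the trace select on (values are made up).
RPC_REPLY = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"},
         "current": true, "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"},
         "current": false, "connected": true, "accessible": false}
      ]
    }
  ]
}
""")

def port_status(reply, trsvcid, field):
    """Python equivalent of the shell helper's jq filter:
    jq -r '.poll_groups[].io_paths[] |
           select(.transport.trsvcid=="<trsvcid>").<field>'
    Returns the requested field of the first io_path whose listener
    matches the given TCP service id (port)."""
    for group in reply["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    raise KeyError(f"no io_path on trsvcid {trsvcid}")

print(port_status(RPC_REPLY, "4420", "current"))     # True
print(port_status(RPC_REPLY, "4421", "accessible"))  # False
```

This is why, after `set_ANA_state non_optimized inaccessible`, the subsequent `check_status true false true true true false` expects `current=false` and `accessible=false` only for the path on port 4421.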
00:24:59.261 11245.00 IOPS, 43.93 MiB/s [2024-11-19T10:36:13.042Z] 11297.50 IOPS, 44.13 MiB/s [2024-11-19T10:36:13.042Z] 11314.67 IOPS, 44.20 MiB/s [2024-11-19T10:36:13.042Z] 11318.75 IOPS, 44.21 MiB/s [2024-11-19T10:36:13.042Z] 11348.80 IOPS, 44.33 MiB/s [2024-11-19T10:36:13.042Z] 11353.33 IOPS, 44.35 MiB/s [2024-11-19T10:36:13.042Z] 11312.57 IOPS, 44.19 MiB/s [2024-11-19T10:36:13.042Z] 11315.50 IOPS, 44.20 MiB/s [2024-11-19T10:36:13.042Z] 11327.00 IOPS, 44.25 MiB/s [2024-11-19T10:36:13.042Z] 11326.10 IOPS, 44.24 MiB/s [2024-11-19T10:36:13.042Z] 11311.73 IOPS, 44.19 MiB/s [2024-11-19T10:36:13.042Z] 11309.33 IOPS, 44.18 MiB/s [2024-11-19T10:36:13.042Z] [2024-11-19 11:35:56.755657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.755696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.755749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.755758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.755771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.755779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.755792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.755799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.755812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.755819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.755831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.755838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.755850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.755857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.755869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.755876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.755915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.755923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.261 [2024-11-19 11:35:56.756815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:59.261 [2024-11-19 11:35:56.756830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.756838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.756852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.756858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.756872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.756879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.756893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.756900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.756913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.756920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.756934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.756941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.756960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.756969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.756985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.756993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.262 [2024-11-19 11:35:56.757603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.262 [2024-11-19 11:35:56.757610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.757980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.757996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.263 [2024-11-19 11:35:56.758452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.263 [2024-11-19 11:35:56.758544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:59.263 [2024-11-19 11:35:56.758560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.758974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.758981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.759000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.759006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.759024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.759031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.759050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:35:56.759059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.759077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.264 [2024-11-19 11:35:56.759084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.759102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.264 [2024-11-19 11:35:56.759108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.759127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.264 [2024-11-19 11:35:56.759133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.759152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.264 [2024-11-19 11:35:56.759158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.759176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.264 [2024-11-19 11:35:56.759183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.759201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.264 [2024-11-19 11:35:56.759208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:35:56.759227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.264 [2024-11-19 11:35:56.759233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:59.264 11114.15 IOPS, 43.41 MiB/s [2024-11-19T10:36:13.045Z] 10320.29 IOPS, 40.31 MiB/s [2024-11-19T10:36:13.045Z] 9632.27 IOPS, 37.63 MiB/s [2024-11-19T10:36:13.045Z] 9191.50 IOPS, 35.90 MiB/s [2024-11-19T10:36:13.045Z] 9304.00 IOPS, 36.34 MiB/s [2024-11-19T10:36:13.045Z] 9397.28 IOPS, 36.71 MiB/s [2024-11-19T10:36:13.045Z] 9563.68 IOPS, 37.36 MiB/s [2024-11-19T10:36:13.045Z] 9742.05 IOPS, 38.05 MiB/s [2024-11-19T10:36:13.045Z] 9898.00 IOPS, 38.66 MiB/s [2024-11-19T10:36:13.045Z] 9950.45 IOPS, 38.87 MiB/s [2024-11-19T10:36:13.045Z] 10003.09 IOPS, 39.07 MiB/s [2024-11-19T10:36:13.045Z] 10075.62 IOPS, 39.36 MiB/s [2024-11-19T10:36:13.045Z] 10196.00 IOPS, 39.83 MiB/s [2024-11-19T10:36:13.045Z] 10306.35 IOPS, 40.26 MiB/s [2024-11-19T10:36:13.045Z] [2024-11-19 11:36:10.335281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:36:10.335321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 
11:36:10.335355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:36:10.335364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:36:10.335378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:36:10.335385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:36:10.335402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:36:10.335410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:36:10.335422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:36:10.335430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:36:10.335442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:36:10.335450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:36:10.335462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:36:10.335469] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:36:10.335482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:36:10.335489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:36:10.335501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:36:10.335508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:36:10.335521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:36:10.335528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:59.264 [2024-11-19 11:36:10.335541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.264 [2024-11-19 11:36:10.335549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.335981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.335994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.336001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.336020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.336040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.336062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.265 [2024-11-19 11:36:10.336084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.265 [2024-11-19 11:36:10.336105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.336125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.265 [2024-11-19 11:36:10.336752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.265 [2024-11-19 11:36:10.336774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.265 [2024-11-19 11:36:10.336794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.336814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.336834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.336854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.336875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.265 [2024-11-19 11:36:10.336895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.265 [2024-11-19 11:36:10.336914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:59.265 [2024-11-19 11:36:10.336929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.265 [2024-11-19 11:36:10.336936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.336957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.336964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.336977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.336984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.336996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.266 [2024-11-19 11:36:10.337380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:59.266 [2024-11-19 11:36:10.337393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.266 [2024-11-19 11:36:10.337400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:59.266 10368.41 IOPS, 40.50 MiB/s [2024-11-19T10:36:13.047Z] 10399.18 IOPS, 40.62 MiB/s [2024-11-19T10:36:13.047Z] Received shutdown signal, test time was about 28.692130 seconds 00:24:59.266 00:24:59.266 Latency(us) 00:24:59.266 [2024-11-19T10:36:13.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.266 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:59.266 Verification LBA range: start 0x0 length 0x4000 00:24:59.266 Nvme0n1 : 28.69 10411.64 40.67 0.00 0.00 12274.02 537.82 3019898.88 00:24:59.266 [2024-11-19T10:36:13.047Z] =================================================================================================================== 00:24:59.266 [2024-11-19T10:36:13.047Z] Total : 10411.64 40.67 0.00 0.00 12274.02 537.82 3019898.88 00:24:59.266 11:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - 
SIGINT SIGTERM EXIT 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:59.526 rmmod nvme_tcp 00:24:59.526 rmmod nvme_fabrics 00:24:59.526 rmmod nvme_keyring 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2371925 ']' 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2371925 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2371925 ']' 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2371925 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:59.526 
11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2371925 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2371925' 00:24:59.526 killing process with pid 2371925 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2371925 00:24:59.526 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2371925 00:24:59.786 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:59.786 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:59.786 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:59.786 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:59.786 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:59.786 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:59.786 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:59.786 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:59.786 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:59.786 11:36:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.786 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.786 11:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.790 11:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:01.790 00:25:01.790 real 0m40.421s 00:25:01.790 user 1m49.573s 00:25:01.790 sys 0m11.443s 00:25:01.790 11:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.790 11:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:01.790 ************************************ 00:25:01.790 END TEST nvmf_host_multipath_status 00:25:01.790 ************************************ 00:25:01.790 11:36:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:01.790 11:36:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:01.790 11:36:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.790 11:36:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.790 ************************************ 00:25:01.790 START TEST nvmf_discovery_remove_ifc 00:25:01.790 ************************************ 00:25:01.790 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:02.075 * Looking for test storage... 
00:25:02.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:02.075 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:25:02.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.076 --rc genhtml_branch_coverage=1 00:25:02.076 --rc genhtml_function_coverage=1 00:25:02.076 --rc genhtml_legend=1 00:25:02.076 --rc geninfo_all_blocks=1 00:25:02.076 --rc geninfo_unexecuted_blocks=1 00:25:02.076 00:25:02.076 ' 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:02.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.076 --rc genhtml_branch_coverage=1 00:25:02.076 --rc genhtml_function_coverage=1 00:25:02.076 --rc genhtml_legend=1 00:25:02.076 --rc geninfo_all_blocks=1 00:25:02.076 --rc geninfo_unexecuted_blocks=1 00:25:02.076 00:25:02.076 ' 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:02.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.076 --rc genhtml_branch_coverage=1 00:25:02.076 --rc genhtml_function_coverage=1 00:25:02.076 --rc genhtml_legend=1 00:25:02.076 --rc geninfo_all_blocks=1 00:25:02.076 --rc geninfo_unexecuted_blocks=1 00:25:02.076 00:25:02.076 ' 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:02.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.076 --rc genhtml_branch_coverage=1 00:25:02.076 --rc genhtml_function_coverage=1 00:25:02.076 --rc genhtml_legend=1 00:25:02.076 --rc geninfo_all_blocks=1 00:25:02.076 --rc geninfo_unexecuted_blocks=1 00:25:02.076 00:25:02.076 ' 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:02.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.076 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:02.077 
11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:02.077 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:02.077 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.077 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.077 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.077 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:02.077 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:02.077 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:02.077 11:36:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.646 11:36:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.646 11:36:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:08.646 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.646 11:36:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:08.646 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:08.646 Found net devices under 0000:86:00.0: cvl_0_0 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:08.646 Found net devices under 0000:86:00.1: cvl_0_1 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:25:08.646 00:25:08.646 --- 10.0.0.2 ping statistics --- 00:25:08.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.646 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:25:08.646 00:25:08.646 --- 10.0.0.1 ping statistics --- 00:25:08.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.646 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.646 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2381311 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2381311 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2381311 ']' 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.647 [2024-11-19 11:36:21.699577] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:25:08.647 [2024-11-19 11:36:21.699621] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.647 [2024-11-19 11:36:21.780301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.647 [2024-11-19 11:36:21.822077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.647 [2024-11-19 11:36:21.822116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:08.647 [2024-11-19 11:36:21.822126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.647 [2024-11-19 11:36:21.822133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.647 [2024-11-19 11:36:21.822140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.647 [2024-11-19 11:36:21.822672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.647 11:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.647 [2024-11-19 11:36:21.974658] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.647 [2024-11-19 11:36:21.982836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:08.647 null0 00:25:08.647 [2024-11-19 11:36:22.014814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2381473 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2381473 /tmp/host.sock 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2381473 ']' 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:08.647 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.647 [2024-11-19 11:36:22.085335] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:25:08.647 [2024-11-19 11:36:22.085377] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381473 ] 00:25:08.647 [2024-11-19 11:36:22.158389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.647 [2024-11-19 11:36:22.202419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.647 11:36:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.647 11:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.583 [2024-11-19 11:36:23.341079] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:09.583 [2024-11-19 11:36:23.341100] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:09.583 [2024-11-19 11:36:23.341115] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:09.840 [2024-11-19 11:36:23.469510] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:09.840 [2024-11-19 11:36:23.571820] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:09.840 [2024-11-19 11:36:23.572627] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xc369f0:1 started. 
00:25:09.840 [2024-11-19 11:36:23.574005] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:09.840 [2024-11-19 11:36:23.574046] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:09.840 [2024-11-19 11:36:23.574064] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:09.840 [2024-11-19 11:36:23.574078] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:09.840 [2024-11-19 11:36:23.574097] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:09.840 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.840 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:09.840 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:09.840 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.840 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:09.840 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.840 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:09.840 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.840 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:09.840 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.098 [2024-11-19 11:36:23.620525] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 1] qpair 0xc369f0 was disconnected and freed. delete nvme_qpair. 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:10.098 11:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:11.033 11:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:11.033 11:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.033 11:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:11.033 11:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.033 11:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:11.033 11:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.033 11:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:11.033 11:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.292 11:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:11.292 11:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:12.228 11:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:12.228 11:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.228 11:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:12.228 11:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.228 11:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:12.228 11:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.228 11:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:25:12.228 11:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.228 11:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:12.228 11:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:13.165 11:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:13.165 11:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.165 11:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:13.165 11:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.165 11:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:13.165 11:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.165 11:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:13.165 11:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.165 11:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:13.165 11:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:14.542 11:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.542 11:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.542 11:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.542 11:36:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.542 11:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.542 11:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.542 11:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.542 11:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.542 11:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:14.542 11:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:15.478 11:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:15.478 11:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.478 11:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:15.478 11:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.478 11:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:15.478 11:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.478 11:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:15.478 11:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.478 [2024-11-19 11:36:29.015557] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:15.478 
[2024-11-19 11:36:29.015595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.478 [2024-11-19 11:36:29.015607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.478 [2024-11-19 11:36:29.015616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.478 [2024-11-19 11:36:29.015623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.478 [2024-11-19 11:36:29.015630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.478 [2024-11-19 11:36:29.015637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.478 [2024-11-19 11:36:29.015644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.478 [2024-11-19 11:36:29.015651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.478 [2024-11-19 11:36:29.015659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.478 [2024-11-19 11:36:29.015666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.478 [2024-11-19 11:36:29.015672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13220 is same with the state(6) to be set 00:25:15.478 11:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 
!= '' ]] 00:25:15.478 11:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:15.478 [2024-11-19 11:36:29.025580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc13220 (9): Bad file descriptor 00:25:15.478 [2024-11-19 11:36:29.035616] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:15.478 [2024-11-19 11:36:29.035627] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:15.478 [2024-11-19 11:36:29.035632] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:15.478 [2024-11-19 11:36:29.035636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:15.478 [2024-11-19 11:36:29.035655] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:16.414 11:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:16.414 11:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.414 11:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:16.414 11:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.414 11:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:16.414 11:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.414 11:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:16.414 [2024-11-19 11:36:30.063986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:16.414 [2024-11-19 11:36:30.064070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc13220 with addr=10.0.0.2, port=4420 00:25:16.414 [2024-11-19 11:36:30.064106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc13220 is same with the state(6) to be set 00:25:16.414 [2024-11-19 11:36:30.064167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc13220 (9): Bad file descriptor 00:25:16.414 [2024-11-19 11:36:30.065147] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:25:16.414 [2024-11-19 11:36:30.065214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:16.414 [2024-11-19 11:36:30.065238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:16.414 [2024-11-19 11:36:30.065262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:16.414 [2024-11-19 11:36:30.065282] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:16.414 [2024-11-19 11:36:30.065298] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:16.414 [2024-11-19 11:36:30.065312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:16.414 [2024-11-19 11:36:30.065334] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:16.414 [2024-11-19 11:36:30.065350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:16.414 11:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.414 11:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:16.414 11:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:17.351 [2024-11-19 11:36:31.067873] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:17.351 [2024-11-19 11:36:31.067895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:17.351 [2024-11-19 11:36:31.067907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:17.351 [2024-11-19 11:36:31.067914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:17.351 [2024-11-19 11:36:31.067922] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:17.351 [2024-11-19 11:36:31.067944] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:17.351 [2024-11-19 11:36:31.067955] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:17.351 [2024-11-19 11:36:31.067959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:17.351 [2024-11-19 11:36:31.067981] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:17.351 [2024-11-19 11:36:31.068004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.351 [2024-11-19 11:36:31.068014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.351 [2024-11-19 11:36:31.068030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.351 [2024-11-19 11:36:31.068038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.351 [2024-11-19 11:36:31.068045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:17.351 [2024-11-19 11:36:31.068052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.351 [2024-11-19 11:36:31.068059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.351 [2024-11-19 11:36:31.068066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.351 [2024-11-19 11:36:31.068074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.351 [2024-11-19 11:36:31.068081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.351 [2024-11-19 11:36:31.068088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:17.351 [2024-11-19 11:36:31.068485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc02900 (9): Bad file descriptor 00:25:17.351 [2024-11-19 11:36:31.069497] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:17.351 [2024-11-19 11:36:31.069509] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:17.351 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.351 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.351 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.351 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:17.351 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.351 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.351 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.351 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:17.610 11:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:18.546 11:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.546 11:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.546 11:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.546 11:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.546 11:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.546 11:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.546 11:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.546 11:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.546 11:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:18.546 11:36:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.484 [2024-11-19 11:36:33.127426] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:19.484 [2024-11-19 11:36:33.127444] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:19.484 [2024-11-19 11:36:33.127455] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:19.484 [2024-11-19 11:36:33.253849] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:19.743 [2024-11-19 11:36:33.315414] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:19.743 [2024-11-19 11:36:33.315984] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xc0dfd0:1 started. 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.743 [2024-11-19 11:36:33.317053] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:19.743 [2024-11-19 11:36:33.317084] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:19.743 [2024-11-19 11:36:33.317100] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:19.743 [2024-11-19 11:36:33.317113] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:19.743 [2024-11-19 11:36:33.317120] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.743 [2024-11-19 11:36:33.325113] 
bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xc0dfd0 was disconnected and freed. delete nvme_qpair. 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2381473 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2381473 ']' 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2381473 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2381473 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2381473' 00:25:19.743 killing process with pid 2381473 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2381473 00:25:19.743 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2381473 00:25:20.002 11:36:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:20.002 rmmod nvme_tcp 00:25:20.002 rmmod nvme_fabrics 00:25:20.002 rmmod nvme_keyring 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2381311 ']' 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2381311 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2381311 ']' 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2381311 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2381311 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2381311' 00:25:20.002 killing process with pid 2381311 00:25:20.002 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2381311 00:25:20.003 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2381311 00:25:20.262 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:20.262 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:20.262 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:20.262 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:20.262 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:20.262 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:20.262 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:20.262 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:20.262 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:20.262 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.262 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:25:20.262 11:36:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.167 11:36:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.167 00:25:22.167 real 0m20.413s 00:25:22.167 user 0m24.580s 00:25:22.167 sys 0m5.845s 00:25:22.167 11:36:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:22.167 11:36:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.167 ************************************ 00:25:22.167 END TEST nvmf_discovery_remove_ifc 00:25:22.167 ************************************ 00:25:22.427 11:36:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:22.427 11:36:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:22.427 11:36:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:22.427 11:36:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.427 ************************************ 00:25:22.427 START TEST nvmf_identify_kernel_target 00:25:22.427 ************************************ 00:25:22.427 11:36:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:22.427 * Looking for test storage... 
00:25:22.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:22.427 11:36:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.427 11:36:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.427 --rc genhtml_branch_coverage=1 00:25:22.427 --rc genhtml_function_coverage=1 00:25:22.427 --rc genhtml_legend=1 00:25:22.427 --rc geninfo_all_blocks=1 00:25:22.427 --rc geninfo_unexecuted_blocks=1 00:25:22.427 00:25:22.427 ' 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.427 --rc genhtml_branch_coverage=1 00:25:22.427 --rc genhtml_function_coverage=1 00:25:22.427 --rc genhtml_legend=1 00:25:22.427 --rc geninfo_all_blocks=1 00:25:22.427 --rc geninfo_unexecuted_blocks=1 00:25:22.427 00:25:22.427 ' 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.427 --rc genhtml_branch_coverage=1 00:25:22.427 --rc genhtml_function_coverage=1 00:25:22.427 --rc genhtml_legend=1 00:25:22.427 --rc geninfo_all_blocks=1 00:25:22.427 --rc geninfo_unexecuted_blocks=1 00:25:22.427 00:25:22.427 ' 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.427 --rc genhtml_branch_coverage=1 00:25:22.427 --rc genhtml_function_coverage=1 00:25:22.427 --rc genhtml_legend=1 00:25:22.427 --rc geninfo_all_blocks=1 00:25:22.427 --rc geninfo_unexecuted_blocks=1 00:25:22.427 00:25:22.427 ' 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.427 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:22.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.428 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:22.687 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:22.687 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.687 11:36:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.256 11:36:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:29.256 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.256 11:36:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:29.256 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.256 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.257 11:36:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:29.257 Found net devices under 0000:86:00.0: cvl_0_0 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:29.257 Found net devices under 0000:86:00.1: cvl_0_1 
00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:29.257 11:36:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:29.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:29.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:25:29.257 00:25:29.257 --- 10.0.0.2 ping statistics --- 00:25:29.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.257 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:25:29.257 00:25:29.257 --- 10.0.0.1 ping statistics --- 00:25:29.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.257 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:29.257 
11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:29.257 11:36:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:31.162 Waiting for block devices as requested 00:25:31.162 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:31.421 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:31.421 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:31.421 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:31.680 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:31.680 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:31.680 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:31.680 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:31.939 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:31.939 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:31.939 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:32.197 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:32.197 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:32.197 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:32.456 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:25:32.456 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:32.456 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:32.456 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:32.456 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:32.456 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:32.456 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:32.456 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:32.456 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:32.456 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:32.456 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:32.714 No valid GPT data, bailing 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:32.714 00:25:32.714 Discovery Log Number of Records 2, Generation counter 2 00:25:32.714 =====Discovery Log Entry 0====== 00:25:32.714 trtype: tcp 00:25:32.714 adrfam: ipv4 00:25:32.714 subtype: current discovery subsystem 
00:25:32.714 treq: not specified, sq flow control disable supported 00:25:32.714 portid: 1 00:25:32.714 trsvcid: 4420 00:25:32.714 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:32.714 traddr: 10.0.0.1 00:25:32.714 eflags: none 00:25:32.714 sectype: none 00:25:32.714 =====Discovery Log Entry 1====== 00:25:32.714 trtype: tcp 00:25:32.714 adrfam: ipv4 00:25:32.714 subtype: nvme subsystem 00:25:32.714 treq: not specified, sq flow control disable supported 00:25:32.714 portid: 1 00:25:32.714 trsvcid: 4420 00:25:32.714 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:32.714 traddr: 10.0.0.1 00:25:32.714 eflags: none 00:25:32.714 sectype: none 00:25:32.714 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:32.714 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:32.714 ===================================================== 00:25:32.714 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:32.714 ===================================================== 00:25:32.714 Controller Capabilities/Features 00:25:32.714 ================================ 00:25:32.714 Vendor ID: 0000 00:25:32.714 Subsystem Vendor ID: 0000 00:25:32.714 Serial Number: d57dc0d677a0c04bfe4c 00:25:32.714 Model Number: Linux 00:25:32.714 Firmware Version: 6.8.9-20 00:25:32.714 Recommended Arb Burst: 0 00:25:32.714 IEEE OUI Identifier: 00 00 00 00:25:32.714 Multi-path I/O 00:25:32.714 May have multiple subsystem ports: No 00:25:32.714 May have multiple controllers: No 00:25:32.714 Associated with SR-IOV VF: No 00:25:32.714 Max Data Transfer Size: Unlimited 00:25:32.714 Max Number of Namespaces: 0 00:25:32.714 Max Number of I/O Queues: 1024 00:25:32.714 NVMe Specification Version (VS): 1.3 00:25:32.714 NVMe Specification Version (Identify): 1.3 00:25:32.714 Maximum Queue Entries: 1024 
00:25:32.714 Contiguous Queues Required: No 00:25:32.714 Arbitration Mechanisms Supported 00:25:32.714 Weighted Round Robin: Not Supported 00:25:32.714 Vendor Specific: Not Supported 00:25:32.714 Reset Timeout: 7500 ms 00:25:32.714 Doorbell Stride: 4 bytes 00:25:32.714 NVM Subsystem Reset: Not Supported 00:25:32.714 Command Sets Supported 00:25:32.714 NVM Command Set: Supported 00:25:32.714 Boot Partition: Not Supported 00:25:32.714 Memory Page Size Minimum: 4096 bytes 00:25:32.714 Memory Page Size Maximum: 4096 bytes 00:25:32.714 Persistent Memory Region: Not Supported 00:25:32.714 Optional Asynchronous Events Supported 00:25:32.714 Namespace Attribute Notices: Not Supported 00:25:32.715 Firmware Activation Notices: Not Supported 00:25:32.715 ANA Change Notices: Not Supported 00:25:32.715 PLE Aggregate Log Change Notices: Not Supported 00:25:32.715 LBA Status Info Alert Notices: Not Supported 00:25:32.715 EGE Aggregate Log Change Notices: Not Supported 00:25:32.715 Normal NVM Subsystem Shutdown event: Not Supported 00:25:32.715 Zone Descriptor Change Notices: Not Supported 00:25:32.715 Discovery Log Change Notices: Supported 00:25:32.715 Controller Attributes 00:25:32.715 128-bit Host Identifier: Not Supported 00:25:32.715 Non-Operational Permissive Mode: Not Supported 00:25:32.715 NVM Sets: Not Supported 00:25:32.715 Read Recovery Levels: Not Supported 00:25:32.715 Endurance Groups: Not Supported 00:25:32.715 Predictable Latency Mode: Not Supported 00:25:32.715 Traffic Based Keep ALive: Not Supported 00:25:32.715 Namespace Granularity: Not Supported 00:25:32.715 SQ Associations: Not Supported 00:25:32.715 UUID List: Not Supported 00:25:32.715 Multi-Domain Subsystem: Not Supported 00:25:32.715 Fixed Capacity Management: Not Supported 00:25:32.715 Variable Capacity Management: Not Supported 00:25:32.715 Delete Endurance Group: Not Supported 00:25:32.715 Delete NVM Set: Not Supported 00:25:32.715 Extended LBA Formats Supported: Not Supported 00:25:32.715 Flexible 
Data Placement Supported: Not Supported 00:25:32.715 00:25:32.715 Controller Memory Buffer Support 00:25:32.715 ================================ 00:25:32.715 Supported: No 00:25:32.715 00:25:32.715 Persistent Memory Region Support 00:25:32.715 ================================ 00:25:32.715 Supported: No 00:25:32.715 00:25:32.715 Admin Command Set Attributes 00:25:32.715 ============================ 00:25:32.715 Security Send/Receive: Not Supported 00:25:32.715 Format NVM: Not Supported 00:25:32.715 Firmware Activate/Download: Not Supported 00:25:32.715 Namespace Management: Not Supported 00:25:32.715 Device Self-Test: Not Supported 00:25:32.715 Directives: Not Supported 00:25:32.715 NVMe-MI: Not Supported 00:25:32.715 Virtualization Management: Not Supported 00:25:32.715 Doorbell Buffer Config: Not Supported 00:25:32.715 Get LBA Status Capability: Not Supported 00:25:32.715 Command & Feature Lockdown Capability: Not Supported 00:25:32.715 Abort Command Limit: 1 00:25:32.715 Async Event Request Limit: 1 00:25:32.715 Number of Firmware Slots: N/A 00:25:32.715 Firmware Slot 1 Read-Only: N/A 00:25:32.974 Firmware Activation Without Reset: N/A 00:25:32.974 Multiple Update Detection Support: N/A 00:25:32.974 Firmware Update Granularity: No Information Provided 00:25:32.974 Per-Namespace SMART Log: No 00:25:32.974 Asymmetric Namespace Access Log Page: Not Supported 00:25:32.974 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:32.974 Command Effects Log Page: Not Supported 00:25:32.974 Get Log Page Extended Data: Supported 00:25:32.974 Telemetry Log Pages: Not Supported 00:25:32.974 Persistent Event Log Pages: Not Supported 00:25:32.974 Supported Log Pages Log Page: May Support 00:25:32.974 Commands Supported & Effects Log Page: Not Supported 00:25:32.974 Feature Identifiers & Effects Log Page:May Support 00:25:32.974 NVMe-MI Commands & Effects Log Page: May Support 00:25:32.974 Data Area 4 for Telemetry Log: Not Supported 00:25:32.974 Error Log Page Entries 
Supported: 1 00:25:32.974 Keep Alive: Not Supported 00:25:32.974 00:25:32.974 NVM Command Set Attributes 00:25:32.974 ========================== 00:25:32.974 Submission Queue Entry Size 00:25:32.974 Max: 1 00:25:32.974 Min: 1 00:25:32.974 Completion Queue Entry Size 00:25:32.974 Max: 1 00:25:32.974 Min: 1 00:25:32.974 Number of Namespaces: 0 00:25:32.974 Compare Command: Not Supported 00:25:32.974 Write Uncorrectable Command: Not Supported 00:25:32.974 Dataset Management Command: Not Supported 00:25:32.975 Write Zeroes Command: Not Supported 00:25:32.975 Set Features Save Field: Not Supported 00:25:32.975 Reservations: Not Supported 00:25:32.975 Timestamp: Not Supported 00:25:32.975 Copy: Not Supported 00:25:32.975 Volatile Write Cache: Not Present 00:25:32.975 Atomic Write Unit (Normal): 1 00:25:32.975 Atomic Write Unit (PFail): 1 00:25:32.975 Atomic Compare & Write Unit: 1 00:25:32.975 Fused Compare & Write: Not Supported 00:25:32.975 Scatter-Gather List 00:25:32.975 SGL Command Set: Supported 00:25:32.975 SGL Keyed: Not Supported 00:25:32.975 SGL Bit Bucket Descriptor: Not Supported 00:25:32.975 SGL Metadata Pointer: Not Supported 00:25:32.975 Oversized SGL: Not Supported 00:25:32.975 SGL Metadata Address: Not Supported 00:25:32.975 SGL Offset: Supported 00:25:32.975 Transport SGL Data Block: Not Supported 00:25:32.975 Replay Protected Memory Block: Not Supported 00:25:32.975 00:25:32.975 Firmware Slot Information 00:25:32.975 ========================= 00:25:32.975 Active slot: 0 00:25:32.975 00:25:32.975 00:25:32.975 Error Log 00:25:32.975 ========= 00:25:32.975 00:25:32.975 Active Namespaces 00:25:32.975 ================= 00:25:32.975 Discovery Log Page 00:25:32.975 ================== 00:25:32.975 Generation Counter: 2 00:25:32.975 Number of Records: 2 00:25:32.975 Record Format: 0 00:25:32.975 00:25:32.975 Discovery Log Entry 0 00:25:32.975 ---------------------- 00:25:32.975 Transport Type: 3 (TCP) 00:25:32.975 Address Family: 1 (IPv4) 00:25:32.975 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:32.975 Entry Flags: 00:25:32.975 Duplicate Returned Information: 0 00:25:32.975 Explicit Persistent Connection Support for Discovery: 0 00:25:32.975 Transport Requirements: 00:25:32.975 Secure Channel: Not Specified 00:25:32.975 Port ID: 1 (0x0001) 00:25:32.975 Controller ID: 65535 (0xffff) 00:25:32.975 Admin Max SQ Size: 32 00:25:32.975 Transport Service Identifier: 4420 00:25:32.975 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:32.975 Transport Address: 10.0.0.1 00:25:32.975 Discovery Log Entry 1 00:25:32.975 ---------------------- 00:25:32.975 Transport Type: 3 (TCP) 00:25:32.975 Address Family: 1 (IPv4) 00:25:32.975 Subsystem Type: 2 (NVM Subsystem) 00:25:32.975 Entry Flags: 00:25:32.975 Duplicate Returned Information: 0 00:25:32.975 Explicit Persistent Connection Support for Discovery: 0 00:25:32.975 Transport Requirements: 00:25:32.975 Secure Channel: Not Specified 00:25:32.975 Port ID: 1 (0x0001) 00:25:32.975 Controller ID: 65535 (0xffff) 00:25:32.975 Admin Max SQ Size: 32 00:25:32.975 Transport Service Identifier: 4420 00:25:32.975 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:32.975 Transport Address: 10.0.0.1 00:25:32.975 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:32.975 get_feature(0x01) failed 00:25:32.975 get_feature(0x02) failed 00:25:32.975 get_feature(0x04) failed 00:25:32.975 ===================================================== 00:25:32.975 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:32.975 ===================================================== 00:25:32.975 Controller Capabilities/Features 00:25:32.975 ================================ 00:25:32.975 Vendor ID: 0000 00:25:32.975 Subsystem Vendor ID: 
0000 00:25:32.975 Serial Number: d4c1370b38f405c61010 00:25:32.975 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:32.975 Firmware Version: 6.8.9-20 00:25:32.975 Recommended Arb Burst: 6 00:25:32.975 IEEE OUI Identifier: 00 00 00 00:25:32.975 Multi-path I/O 00:25:32.975 May have multiple subsystem ports: Yes 00:25:32.975 May have multiple controllers: Yes 00:25:32.975 Associated with SR-IOV VF: No 00:25:32.975 Max Data Transfer Size: Unlimited 00:25:32.975 Max Number of Namespaces: 1024 00:25:32.975 Max Number of I/O Queues: 128 00:25:32.975 NVMe Specification Version (VS): 1.3 00:25:32.975 NVMe Specification Version (Identify): 1.3 00:25:32.975 Maximum Queue Entries: 1024 00:25:32.975 Contiguous Queues Required: No 00:25:32.975 Arbitration Mechanisms Supported 00:25:32.975 Weighted Round Robin: Not Supported 00:25:32.975 Vendor Specific: Not Supported 00:25:32.975 Reset Timeout: 7500 ms 00:25:32.975 Doorbell Stride: 4 bytes 00:25:32.975 NVM Subsystem Reset: Not Supported 00:25:32.975 Command Sets Supported 00:25:32.975 NVM Command Set: Supported 00:25:32.975 Boot Partition: Not Supported 00:25:32.975 Memory Page Size Minimum: 4096 bytes 00:25:32.975 Memory Page Size Maximum: 4096 bytes 00:25:32.975 Persistent Memory Region: Not Supported 00:25:32.975 Optional Asynchronous Events Supported 00:25:32.975 Namespace Attribute Notices: Supported 00:25:32.975 Firmware Activation Notices: Not Supported 00:25:32.975 ANA Change Notices: Supported 00:25:32.975 PLE Aggregate Log Change Notices: Not Supported 00:25:32.975 LBA Status Info Alert Notices: Not Supported 00:25:32.975 EGE Aggregate Log Change Notices: Not Supported 00:25:32.975 Normal NVM Subsystem Shutdown event: Not Supported 00:25:32.975 Zone Descriptor Change Notices: Not Supported 00:25:32.975 Discovery Log Change Notices: Not Supported 00:25:32.975 Controller Attributes 00:25:32.975 128-bit Host Identifier: Supported 00:25:32.975 Non-Operational Permissive Mode: Not Supported 00:25:32.975 NVM Sets: Not 
Supported 00:25:32.975 Read Recovery Levels: Not Supported 00:25:32.975 Endurance Groups: Not Supported 00:25:32.975 Predictable Latency Mode: Not Supported 00:25:32.975 Traffic Based Keep ALive: Supported 00:25:32.975 Namespace Granularity: Not Supported 00:25:32.975 SQ Associations: Not Supported 00:25:32.975 UUID List: Not Supported 00:25:32.975 Multi-Domain Subsystem: Not Supported 00:25:32.975 Fixed Capacity Management: Not Supported 00:25:32.975 Variable Capacity Management: Not Supported 00:25:32.975 Delete Endurance Group: Not Supported 00:25:32.975 Delete NVM Set: Not Supported 00:25:32.975 Extended LBA Formats Supported: Not Supported 00:25:32.975 Flexible Data Placement Supported: Not Supported 00:25:32.975 00:25:32.975 Controller Memory Buffer Support 00:25:32.975 ================================ 00:25:32.975 Supported: No 00:25:32.975 00:25:32.975 Persistent Memory Region Support 00:25:32.975 ================================ 00:25:32.975 Supported: No 00:25:32.975 00:25:32.975 Admin Command Set Attributes 00:25:32.975 ============================ 00:25:32.975 Security Send/Receive: Not Supported 00:25:32.975 Format NVM: Not Supported 00:25:32.975 Firmware Activate/Download: Not Supported 00:25:32.975 Namespace Management: Not Supported 00:25:32.975 Device Self-Test: Not Supported 00:25:32.975 Directives: Not Supported 00:25:32.975 NVMe-MI: Not Supported 00:25:32.975 Virtualization Management: Not Supported 00:25:32.975 Doorbell Buffer Config: Not Supported 00:25:32.975 Get LBA Status Capability: Not Supported 00:25:32.975 Command & Feature Lockdown Capability: Not Supported 00:25:32.975 Abort Command Limit: 4 00:25:32.975 Async Event Request Limit: 4 00:25:32.975 Number of Firmware Slots: N/A 00:25:32.975 Firmware Slot 1 Read-Only: N/A 00:25:32.975 Firmware Activation Without Reset: N/A 00:25:32.975 Multiple Update Detection Support: N/A 00:25:32.975 Firmware Update Granularity: No Information Provided 00:25:32.975 Per-Namespace SMART Log: Yes 
00:25:32.975 Asymmetric Namespace Access Log Page: Supported 00:25:32.975 ANA Transition Time : 10 sec 00:25:32.975 00:25:32.975 Asymmetric Namespace Access Capabilities 00:25:32.975 ANA Optimized State : Supported 00:25:32.975 ANA Non-Optimized State : Supported 00:25:32.975 ANA Inaccessible State : Supported 00:25:32.975 ANA Persistent Loss State : Supported 00:25:32.975 ANA Change State : Supported 00:25:32.975 ANAGRPID is not changed : No 00:25:32.975 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:32.975 00:25:32.975 ANA Group Identifier Maximum : 128 00:25:32.975 Number of ANA Group Identifiers : 128 00:25:32.975 Max Number of Allowed Namespaces : 1024 00:25:32.975 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:32.975 Command Effects Log Page: Supported 00:25:32.975 Get Log Page Extended Data: Supported 00:25:32.976 Telemetry Log Pages: Not Supported 00:25:32.976 Persistent Event Log Pages: Not Supported 00:25:32.976 Supported Log Pages Log Page: May Support 00:25:32.976 Commands Supported & Effects Log Page: Not Supported 00:25:32.976 Feature Identifiers & Effects Log Page:May Support 00:25:32.976 NVMe-MI Commands & Effects Log Page: May Support 00:25:32.976 Data Area 4 for Telemetry Log: Not Supported 00:25:32.976 Error Log Page Entries Supported: 128 00:25:32.976 Keep Alive: Supported 00:25:32.976 Keep Alive Granularity: 1000 ms 00:25:32.976 00:25:32.976 NVM Command Set Attributes 00:25:32.976 ========================== 00:25:32.976 Submission Queue Entry Size 00:25:32.976 Max: 64 00:25:32.976 Min: 64 00:25:32.976 Completion Queue Entry Size 00:25:32.976 Max: 16 00:25:32.976 Min: 16 00:25:32.976 Number of Namespaces: 1024 00:25:32.976 Compare Command: Not Supported 00:25:32.976 Write Uncorrectable Command: Not Supported 00:25:32.976 Dataset Management Command: Supported 00:25:32.976 Write Zeroes Command: Supported 00:25:32.976 Set Features Save Field: Not Supported 00:25:32.976 Reservations: Not Supported 00:25:32.976 Timestamp: Not Supported 
00:25:32.976 Copy: Not Supported 00:25:32.976 Volatile Write Cache: Present 00:25:32.976 Atomic Write Unit (Normal): 1 00:25:32.976 Atomic Write Unit (PFail): 1 00:25:32.976 Atomic Compare & Write Unit: 1 00:25:32.976 Fused Compare & Write: Not Supported 00:25:32.976 Scatter-Gather List 00:25:32.976 SGL Command Set: Supported 00:25:32.976 SGL Keyed: Not Supported 00:25:32.976 SGL Bit Bucket Descriptor: Not Supported 00:25:32.976 SGL Metadata Pointer: Not Supported 00:25:32.976 Oversized SGL: Not Supported 00:25:32.976 SGL Metadata Address: Not Supported 00:25:32.976 SGL Offset: Supported 00:25:32.976 Transport SGL Data Block: Not Supported 00:25:32.976 Replay Protected Memory Block: Not Supported 00:25:32.976 00:25:32.976 Firmware Slot Information 00:25:32.976 ========================= 00:25:32.976 Active slot: 0 00:25:32.976 00:25:32.976 Asymmetric Namespace Access 00:25:32.976 =========================== 00:25:32.976 Change Count : 0 00:25:32.976 Number of ANA Group Descriptors : 1 00:25:32.976 ANA Group Descriptor : 0 00:25:32.976 ANA Group ID : 1 00:25:32.976 Number of NSID Values : 1 00:25:32.976 Change Count : 0 00:25:32.976 ANA State : 1 00:25:32.976 Namespace Identifier : 1 00:25:32.976 00:25:32.976 Commands Supported and Effects 00:25:32.976 ============================== 00:25:32.976 Admin Commands 00:25:32.976 -------------- 00:25:32.976 Get Log Page (02h): Supported 00:25:32.976 Identify (06h): Supported 00:25:32.976 Abort (08h): Supported 00:25:32.976 Set Features (09h): Supported 00:25:32.976 Get Features (0Ah): Supported 00:25:32.976 Asynchronous Event Request (0Ch): Supported 00:25:32.976 Keep Alive (18h): Supported 00:25:32.976 I/O Commands 00:25:32.976 ------------ 00:25:32.976 Flush (00h): Supported 00:25:32.976 Write (01h): Supported LBA-Change 00:25:32.976 Read (02h): Supported 00:25:32.976 Write Zeroes (08h): Supported LBA-Change 00:25:32.976 Dataset Management (09h): Supported 00:25:32.976 00:25:32.976 Error Log 00:25:32.976 ========= 
00:25:32.976 Entry: 0 00:25:32.976 Error Count: 0x3 00:25:32.976 Submission Queue Id: 0x0 00:25:32.976 Command Id: 0x5 00:25:32.976 Phase Bit: 0 00:25:32.976 Status Code: 0x2 00:25:32.976 Status Code Type: 0x0 00:25:32.976 Do Not Retry: 1 00:25:32.976 Error Location: 0x28 00:25:32.976 LBA: 0x0 00:25:32.976 Namespace: 0x0 00:25:32.976 Vendor Log Page: 0x0 00:25:32.976 ----------- 00:25:32.976 Entry: 1 00:25:32.976 Error Count: 0x2 00:25:32.976 Submission Queue Id: 0x0 00:25:32.976 Command Id: 0x5 00:25:32.976 Phase Bit: 0 00:25:32.976 Status Code: 0x2 00:25:32.976 Status Code Type: 0x0 00:25:32.976 Do Not Retry: 1 00:25:32.976 Error Location: 0x28 00:25:32.976 LBA: 0x0 00:25:32.976 Namespace: 0x0 00:25:32.976 Vendor Log Page: 0x0 00:25:32.976 ----------- 00:25:32.976 Entry: 2 00:25:32.976 Error Count: 0x1 00:25:32.976 Submission Queue Id: 0x0 00:25:32.976 Command Id: 0x4 00:25:32.976 Phase Bit: 0 00:25:32.976 Status Code: 0x2 00:25:32.976 Status Code Type: 0x0 00:25:32.976 Do Not Retry: 1 00:25:32.976 Error Location: 0x28 00:25:32.976 LBA: 0x0 00:25:32.976 Namespace: 0x0 00:25:32.976 Vendor Log Page: 0x0 00:25:32.976 00:25:32.976 Number of Queues 00:25:32.976 ================ 00:25:32.976 Number of I/O Submission Queues: 128 00:25:32.976 Number of I/O Completion Queues: 128 00:25:32.976 00:25:32.976 ZNS Specific Controller Data 00:25:32.976 ============================ 00:25:32.976 Zone Append Size Limit: 0 00:25:32.976 00:25:32.976 00:25:32.976 Active Namespaces 00:25:32.976 ================= 00:25:32.976 get_feature(0x05) failed 00:25:32.976 Namespace ID:1 00:25:32.976 Command Set Identifier: NVM (00h) 00:25:32.976 Deallocate: Supported 00:25:32.976 Deallocated/Unwritten Error: Not Supported 00:25:32.976 Deallocated Read Value: Unknown 00:25:32.976 Deallocate in Write Zeroes: Not Supported 00:25:32.976 Deallocated Guard Field: 0xFFFF 00:25:32.976 Flush: Supported 00:25:32.976 Reservation: Not Supported 00:25:32.976 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:32.976 Size (in LBAs): 1953525168 (931GiB) 00:25:32.976 Capacity (in LBAs): 1953525168 (931GiB) 00:25:32.976 Utilization (in LBAs): 1953525168 (931GiB) 00:25:32.976 UUID: 251f81ce-93af-4caa-8ac0-f4f972434a5e 00:25:32.976 Thin Provisioning: Not Supported 00:25:32.976 Per-NS Atomic Units: Yes 00:25:32.976 Atomic Boundary Size (Normal): 0 00:25:32.976 Atomic Boundary Size (PFail): 0 00:25:32.976 Atomic Boundary Offset: 0 00:25:32.976 NGUID/EUI64 Never Reused: No 00:25:32.976 ANA group ID: 1 00:25:32.976 Namespace Write Protected: No 00:25:32.976 Number of LBA Formats: 1 00:25:32.976 Current LBA Format: LBA Format #00 00:25:32.976 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:32.976 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:32.976 rmmod nvme_tcp 00:25:32.976 rmmod nvme_fabrics 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
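The kernel-target configuration traced earlier in this test (nvmf/common.sh@686-705) and the teardown traced just below (@714-723) can be sketched as a standalone script. xtrace only shows the echoed values, not the redirect targets, so the configfs attribute names below are an assumption based on the standard nvmet configfs layout (the `attr_model` guess is supported by the `Model Number: SPDK-nqn.2016-06.io.spdk:testnqn` line in the identify output above); the NQN, backing device, and 10.0.0.1:4420 address are this run's values. Requires root and the nvmet modules.

```shell
#!/usr/bin/env bash
# Sketch of the kernel nvmet TCP target setup (nvmf/common.sh@686-705) and
# teardown (@714-723) traced in this log. Attribute names are assumed from the
# standard nvmet configfs ABI; values are the ones this test run used.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet nvmet-tcp
mkdir -p "$subsys/namespaces/1" "$port"
echo "SPDK-$nqn"  > "$subsys/attr_model"            # appears as Model Number in identify
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                  # expose the subsystem on the port

# ... in the test, `nvme discover` and spdk_nvme_identify run here ...

# Teardown, mirroring clean_kernel_target: disable, unlink, remove, unload.
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet
```

The discovery log with two records seen above (the well-known discovery subsystem plus nqn.2016-06.io.spdk:testnqn, both on 10.0.0.1:4420) is exactly what this configfs state produces.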
00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.976 11:36:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.513 11:36:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:35.513 11:36:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:35.513 11:36:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:35.513 11:36:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:35.513 11:36:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:35.513 11:36:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:35.513 11:36:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:35.513 11:36:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:35.513 11:36:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:35.513 11:36:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:35.513 11:36:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:38.046 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:38.046 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:38.046 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:38.046 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:38.046 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:38.046 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:38.047 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:38.047 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:38.047 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:38.047 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:38.047 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:38.047 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:38.047 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:38.047 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:38.047 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:38.047 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:25:38.983 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:38.983 00:25:38.983 real 0m16.695s 00:25:38.983 user 0m4.350s 00:25:38.983 sys 0m8.746s 00:25:38.983 11:36:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:38.983 11:36:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.983 ************************************ 00:25:38.983 END TEST nvmf_identify_kernel_target 00:25:38.983 ************************************ 00:25:38.983 11:36:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:38.983 11:36:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:38.983 11:36:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:38.983 11:36:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.983 ************************************ 00:25:38.983 START TEST nvmf_auth_host 00:25:38.983 ************************************ 00:25:38.983 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:39.243 * Looking for test storage... 
00:25:39.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:39.243 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:39.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.244 --rc genhtml_branch_coverage=1 00:25:39.244 --rc genhtml_function_coverage=1 00:25:39.244 --rc genhtml_legend=1 00:25:39.244 --rc geninfo_all_blocks=1 00:25:39.244 --rc geninfo_unexecuted_blocks=1 00:25:39.244 00:25:39.244 ' 00:25:39.244 11:36:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:39.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.244 --rc genhtml_branch_coverage=1 00:25:39.244 --rc genhtml_function_coverage=1 00:25:39.244 --rc genhtml_legend=1 00:25:39.244 --rc geninfo_all_blocks=1 00:25:39.244 --rc geninfo_unexecuted_blocks=1 00:25:39.244 00:25:39.244 ' 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:39.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.244 --rc genhtml_branch_coverage=1 00:25:39.244 --rc genhtml_function_coverage=1 00:25:39.244 --rc genhtml_legend=1 00:25:39.244 --rc geninfo_all_blocks=1 00:25:39.244 --rc geninfo_unexecuted_blocks=1 00:25:39.244 00:25:39.244 ' 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:39.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.244 --rc genhtml_branch_coverage=1 00:25:39.244 --rc genhtml_function_coverage=1 00:25:39.244 --rc genhtml_legend=1 00:25:39.244 --rc geninfo_all_blocks=1 00:25:39.244 --rc geninfo_unexecuted_blocks=1 00:25:39.244 00:25:39.244 ' 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
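The lcov version probe traced above (scripts/common.sh, `lt 1.15 2` expanding into `cmp_versions`) splits each version string on `.` and `-` and compares the components numerically, left to right. A simplified re-implementation of just the less-than case (an assumption: the real `cmp_versions` also supports the other operators and pads with the helper functions shown in the trace):

```shell
# Simplified sketch of the "lt" version comparison traced above: split on
# '.'/'-' via IFS, then compare components numerically; the first differing
# component decides, and missing components are treated as 0.
lt() {
  local IFS=.- v1 v2 i n
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}

lt 1.15 2     && echo "1.15 < 2"       # the comparison the trace performs
lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This is why `lt 1.15 2` succeeds in the trace (1 < 2 on the first component), steering the script into the newer-lcov option set.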
00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.244 11:36:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:39.244 11:36:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:39.244 11:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:45.813 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:45.813 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
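The discovery loop in the log above builds the e810/x722/mlx arrays by matching PCI vendor:device IDs, then reports each hit ("Found 0000:86:00.0 (0x8086 - 0x159b)"). A minimal sketch of that classification step, assuming a hypothetical lookup table that lists only the IDs visible in this log (the helper name and table are illustrative, not the script's actual code):

```python
# Hedged sketch: classify a PCI vendor:device pair the way nvmf/common.sh's
# e810/x722/mlx arrays do. Only IDs appearing in this log are listed; the
# table is illustrative, not exhaustive.
NIC_FAMILIES = {
    ("0x8086", "0x1592"): "e810",  # Intel E810-C
    ("0x8086", "0x159b"): "e810",  # Intel E810-XXV (the "Found" lines above)
    ("0x8086", "0x37d2"): "x722",  # Intel X722
    ("0x15b3", "0x1017"): "mlx",   # Mellanox ConnectX-5
    ("0x15b3", "0x1019"): "mlx",
}

def classify(vendor: str, device: str) -> str:
    """Return the NIC family for a PCI vendor/device pair, or 'unknown'."""
    return NIC_FAMILIES.get((vendor, device), "unknown")

# Both ports found in the log (0000:86:00.0 and .1) report 0x8086 - 0x159b:
print(classify("0x8086", "0x159b"))  # e810
```

This mirrors why the log then takes the `[[ e810 == e810 ]]` branch and collects the two cvl_0_* net devices under those PCI addresses.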
00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:45.813 Found net devices under 0000:86:00.0: cvl_0_0 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:45.813 Found net devices under 0000:86:00.1: cvl_0_1 00:25:45.813 11:36:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.813 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:45.814 11:36:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:45.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:25:45.814 00:25:45.814 --- 10.0.0.2 ping statistics --- 00:25:45.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.814 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:25:45.814 00:25:45.814 --- 10.0.0.1 ping statistics --- 00:25:45.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.814 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2393255 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:45.814 11:36:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2393255 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2393255 ']' 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.814 11:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=319d0afa796d33a2237c568ff1dc0320 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.CH6 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 319d0afa796d33a2237c568ff1dc0320 0 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 319d0afa796d33a2237c568ff1dc0320 0 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=319d0afa796d33a2237c568ff1dc0320 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.CH6 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.CH6 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.CH6 00:25:45.814 11:36:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d9333812999a12c16e6caab7a114e037fb43cca1f9b2e162064dd2a121246b8f 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xZ4 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d9333812999a12c16e6caab7a114e037fb43cca1f9b2e162064dd2a121246b8f 3 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d9333812999a12c16e6caab7a114e037fb43cca1f9b2e162064dd2a121246b8f 3 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d9333812999a12c16e6caab7a114e037fb43cca1f9b2e162064dd2a121246b8f 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xZ4 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xZ4 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.xZ4 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=92b57327a855b3b947f73b06fb5350ec4c3d640984e87851 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.NFr 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 92b57327a855b3b947f73b06fb5350ec4c3d640984e87851 0 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 92b57327a855b3b947f73b06fb5350ec4c3d640984e87851 0 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:45.814 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:45.815 11:36:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=92b57327a855b3b947f73b06fb5350ec4c3d640984e87851 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.NFr 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.NFr 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.NFr 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d83aff9bc002a164a54bf4cbc11ffc34c442ebfc857ca386 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yzh 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d83aff9bc002a164a54bf4cbc11ffc34c442ebfc857ca386 2 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 d83aff9bc002a164a54bf4cbc11ffc34c442ebfc857ca386 2 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d83aff9bc002a164a54bf4cbc11ffc34c442ebfc857ca386 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yzh 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yzh 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.yzh 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=89b157dde281753593410f67de05537a 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.63E 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 89b157dde281753593410f67de05537a 1 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 89b157dde281753593410f67de05537a 1 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=89b157dde281753593410f67de05537a 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.63E 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.63E 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.63E 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=d6e99e9e45f1be471a14dff3cb069a52 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cnP 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d6e99e9e45f1be471a14dff3cb069a52 1 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d6e99e9e45f1be471a14dff3cb069a52 1 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d6e99e9e45f1be471a14dff3cb069a52 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:45.815 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cnP 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cnP 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.cnP 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:46.075 11:36:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=88637d28bf718381df9b51d8531957e3181429c2fee32a60 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.dHZ 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 88637d28bf718381df9b51d8531957e3181429c2fee32a60 2 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 88637d28bf718381df9b51d8531957e3181429c2fee32a60 2 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=88637d28bf718381df9b51d8531957e3181429c2fee32a60 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.dHZ 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.dHZ 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.dHZ 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a7379c7230f5ee4efec341baa4a690cd 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8c3 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a7379c7230f5ee4efec341baa4a690cd 0 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a7379c7230f5ee4efec341baa4a690cd 0 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a7379c7230f5ee4efec341baa4a690cd 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8c3 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8c3 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.8c3 00:25:46.075 11:36:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e264b64712015710073596ab326d8359626c36493ef2c1613c616ecbc56ad91f 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.rwe 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e264b64712015710073596ab326d8359626c36493ef2c1613c616ecbc56ad91f 3 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e264b64712015710073596ab326d8359626c36493ef2c1613c616ecbc56ad91f 3 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e264b64712015710073596ab326d8359626c36493ef2c1613c616ecbc56ad91f 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.rwe 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.rwe 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.rwe 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2393255 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2393255 ']' 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
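The repeated gen_dhchap_key/format_dhchap_key traces above reduce to: read len/2 random bytes as hex via `xxd`, wrap that hex string in the DHHC-1 secret representation (base64 of the ASCII secret with a little-endian CRC-32 of the secret appended, per the NVMe DH-HMAC-CHAP secret format), and store the result in a 0600 temp file. A minimal standalone sketch — the python helper body is reconstructed from the `DHHC-1:<digest>:<base64>:` strings visible later in the trace, not copied from nvmf/common.sh, so treat it as an assumption:

```shell
# Sketch of gen_dhchap_key: <len> hex chars of randomness, formatted as a
# DHHC-1 secret and written to a private temp file (as in the trace above).
gen_dhchap_key() {
    local digest=$1 len=$2 suffix=$3
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$suffix.XXX")
    # base64(secret || crc32_le(secret)) — reconstructed format, an assumption
    python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, binascii, sys
data = sys.argv[1].encode()
crc = binascii.crc32(data).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(data + crc).decode()}:")
EOF
    chmod 0600 "$file"
    echo "$file"
}

f=$(gen_dhchap_key 1 32 sha256)   # digest index 1 (sha256), 32 hex chars
```

Decoding the base64 payload of the keys shown in the trace (e.g. `DHHC-1:00:OTJiNTcz...`) yields the ASCII hex secret plus four trailing CRC bytes, which is what motivates the `data + crc` framing in the sketch.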
00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:46.075 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.336 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:46.336 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:46.336 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:46.336 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CH6 00:25:46.336 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.336 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.336 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.336 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.xZ4 ]] 00:25:46.336 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xZ4 00:25:46.336 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.336 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.336 11:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.NFr 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.yzh ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yzh 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.63E 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.cnP ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cnP 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.dHZ 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.8c3 ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.8c3 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.rwe 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.336 11:37:00 
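The `keyring_file_add_key` loop traced above registers each generated key file as `key<i>` and its paired controller key (when one exists) as `ckey<i>`. A hedged sketch of that loop — file names are the ones visible in the trace, but the rpc.py path and a running SPDK target are assumptions, so the invocations are collected here rather than executed:

```shell
# keys[i]/ckeys[i] registration loop from host/auth.sh@80-82 (sketch).
# ckeys[4] is empty in the trace, so no ckey4 is registered.
keys=(/tmp/spdk.key-null.CH6 /tmp/spdk.key-null.NFr /tmp/spdk.key-sha256.63E
      /tmp/spdk.key-sha384.dHZ /tmp/spdk.key-sha512.rwe)
ckeys=(/tmp/spdk.key-sha512.xZ4 /tmp/spdk.key-sha384.yzh /tmp/spdk.key-sha256.cnP
       /tmp/spdk.key-null.8c3 "")
cmds=()
for i in "${!keys[@]}"; do
    cmds+=("rpc.py keyring_file_add_key key$i ${keys[i]}")
    if [[ -n ${ckeys[i]} ]]; then
        cmds+=("rpc.py keyring_file_add_key ckey$i ${ckeys[i]}")
    fi
done
```

In the real run each of these becomes an `rpc_cmd keyring_file_add_key` call against the target's `/var/tmp/spdk.sock`, as the trace shows.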
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:46.336 11:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:49.627 Waiting for block devices as requested 00:25:49.627 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:49.627 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:49.627 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:49.627 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:49.627 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:49.627 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:49.627 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:49.627 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:49.627 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:49.886 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:49.886 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:49.886 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:49.886 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:50.146 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:50.146 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:50.146 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:50.405 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:50.974 No valid GPT data, bailing 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:50.974 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:50.975 00:25:50.975 Discovery Log Number of Records 2, Generation counter 2 00:25:50.975 =====Discovery Log Entry 0====== 00:25:50.975 trtype: tcp 00:25:50.975 adrfam: ipv4 00:25:50.975 subtype: current discovery subsystem 00:25:50.975 treq: not specified, sq flow control disable supported 00:25:50.975 portid: 1 00:25:50.975 trsvcid: 4420 00:25:50.975 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:50.975 traddr: 10.0.0.1 00:25:50.975 eflags: none 00:25:50.975 sectype: none 00:25:50.975 =====Discovery Log Entry 1====== 00:25:50.975 trtype: tcp 00:25:50.975 adrfam: ipv4 00:25:50.975 subtype: nvme subsystem 00:25:50.975 treq: not specified, sq flow control disable supported 00:25:50.975 portid: 1 00:25:50.975 trsvcid: 4420 00:25:50.975 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:50.975 traddr: 10.0.0.1 00:25:50.975 eflags: none 00:25:50.975 sectype: none 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.975 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.235 nvme0n1 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.235 11:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.494 nvme0n1 00:25:51.494 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.494 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.494 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.494 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.494 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.494 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.494 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.494 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.494 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.494 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.495 11:37:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.495 
11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.495 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.754 nvme0n1 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.754 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]] 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:51.755 nvme0n1 00:25:51.755 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]] 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 nvme0n1 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.275 11:37:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.275 nvme0n1 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.275 11:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.275 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.275 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.275 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.275 
11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.275 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.275 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.275 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.275 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:52.275 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.275 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:25:52.276 
11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.276 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.535 11:37:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.535 nvme0n1 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.535 11:37:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==:
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==:
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==:
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]]
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==:
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:52.535 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:52.795 nvme0n1
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE:
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq:
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE:
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]]
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq:
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:52.795 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:52.796 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.055 nvme0n1
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.055 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==:
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8:
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==:
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]]
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8:
00:25:53.056 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:25:53.314 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.315 11:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.315 nvme0n1
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=:
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=:
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.315 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.574 nvme0n1
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf:
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=:
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf:
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]]
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=:
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.574 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.845 nvme0n1
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.845 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.105 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:54.105 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:54.105 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.105 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.105 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.105 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
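The trace above repeats one cycle per key for a fixed digest/dhgroup pair: install the key on the target (nvmet_auth_set_key), restrict the host to that digest and DH group with bdev_nvme_set_options, connect over TCP with DH-HMAC-CHAP via bdev_nvme_attach_controller, confirm the controller name via bdev_nvme_get_controllers, and detach before the next key. A condensed, hypothetical sketch of that loop (here rpc_cmd is stubbed to print the RPC it would issue; in the real suite it forwards to SPDK's rpc.py, and controller keys ckey0-ckey3 are passed as well):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the traced host/auth.sh inner loop, not the suite itself.
# rpc_cmd is stubbed to print RPCs; the real suite forwards them to SPDK's rpc.py.
rpc_cmd() { echo "rpc: $*"; }

auth_cycle() {
    local digest=$1 dhgroup=$2 keyid
    for keyid in 0 1 2 3 4; do
        # Target side: install key${keyid} for hmac(<digest>) / <dhgroup>
        echo "nvmet_auth_set_key ${digest} ${dhgroup} ${keyid}"
        # Host side: allow exactly this digest/dhgroup combination
        rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
        # Connect with DH-HMAC-CHAP (--dhchap-ctrlr-key omitted here for brevity),
        # then tear the controller down before trying the next key
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
}

auth_cycle sha256 ffdhe4096
```

The outer loops in the log (host/auth.sh@101 and @102) iterate this cycle over every dhgroup in "${dhgroups[@]}" and every key index, which is why the same attach/verify/detach pattern recurs with only the keyid and dhgroup changing.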
00:25:54.105 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:25:54.105 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:54.105 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==:
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==:
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==:
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]]
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==:
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.106 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.366 nvme0n1
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE:
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq:
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE:
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]]
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq:
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.366 11:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.626 nvme0n1
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==:
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8:
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==:
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]]
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8:
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:54.626 11:37:08
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.626 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.886 nvme0n1 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.886 11:37:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.886 
11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.886 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.146 nvme0n1 00:25:55.146 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.146 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.146 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.146 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.146 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.146 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.405 11:37:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.405 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.406 11:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.665 nvme0n1 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:55.665 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.666 11:37:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.666 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.345 nvme0n1 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]] 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.345 11:37:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.345 11:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.651 nvme0n1 00:25:56.651 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.651 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.651 11:37:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.651 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.651 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.651 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.652 11:37:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]] 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.652 11:37:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.652 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.219 nvme0n1 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.219 11:37:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.219 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.220 11:37:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.220 11:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.479 nvme0n1 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.479 11:37:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.479 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.480 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:57.480 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.480 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.416 nvme0n1 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.416 11:37:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.416 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.417 11:37:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.417 11:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.417 11:37:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.985 nvme0n1 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]] 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.985 11:37:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.985 11:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.553 nvme0n1 00:25:59.553 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.553 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.553 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.553 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.553 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.553 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.553 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.553 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.553 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]] 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.554 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.122 nvme0n1 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.122 
11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.122 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.123 11:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.690 nvme0n1 00:26:00.690 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.690 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.690 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.690 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.690 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.690 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.949 nvme0n1 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.949 
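The `get_main_ns_ip` helper traced just above picks which address to dial based on transport: `rdma` maps to `NVMF_FIRST_TARGET_IP`, `tcp` to `NVMF_INITIATOR_IP`, and the resolved value (`10.0.0.1` in this run) is echoed back. A minimal sketch of that selection logic, with the variable names taken from the trace and the environment dict standing in for the shell's exported variables:

```python
def get_main_ns_ip(transport: str, env: dict) -> str:
    """Mirror nvmf/common.sh get_main_ns_ip: map the transport to the
    name of the IP variable, then resolve it through the environment."""
    ip_candidates = {
        "rdma": "NVMF_FIRST_TARGET_IP",
        "tcp": "NVMF_INITIATOR_IP",
    }
    if not transport:
        raise ValueError("transport must be set")   # [[ -z tcp ]] guard
    var = ip_candidates[transport]
    ip = env.get(var, "")
    if not ip:
        raise ValueError(f"{var} is empty")         # [[ -z $ip ]] guard
    return ip

# In this log the initiator address resolves to 10.0.0.1:
print(get_main_ns_ip("tcp", {"NVMF_INITIATOR_IP": "10.0.0.1"}))
```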
11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.949 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.209 nvme0n1 
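The three nested `for` loops visible in the trace (`digests`, `dhgroups`, `keys`) drive a full sweep: for each combination, `nvmet_auth_set_key` installs the target-side key and `connect_authenticate` runs a set-options / attach / verify / detach cycle. A sketch of that matrix, where the exact lists are an assumption read off this log (sha256 and sha384 appear as digests; ffdhe2048 through ffdhe8192 as DH groups; keyids 0 through 4, with keyid 4 having no controller key):

```python
from itertools import product

digests = ["sha256", "sha384"]                      # --dhchap-digests seen in the trace
dhgroups = ["ffdhe2048", "ffdhe3072", "ffdhe4096",
            "ffdhe6144", "ffdhe8192"]               # --dhchap-dhgroups
keyids = range(5)                                   # key0..key4; key4's ckey is empty

# Each case corresponds to one nvmet_auth_set_key + connect_authenticate pass.
cases = list(product(digests, dhgroups, keyids))
print(len(cases))  # 50 combinations under these assumed lists
```

The trace in this chunk is partway through the `sha384` / `ffdhe2048` row of that matrix.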
00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:01.209 11:37:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.209 
11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.209 11:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.468 nvme0n1 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.468 11:37:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.468 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]] 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.469 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.728 nvme0n1 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.728 11:37:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.728 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.987 nvme0n1 00:26:01.987 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.987 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.987 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.987 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.987 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.987 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
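The `DHHC-1:...` strings passed around above are NVMe DH-HMAC-CHAP secrets in their textual representation: `DHHC-1:<hash-id>:<base64 payload>:`, where the payload is the raw key followed by a 4-byte CRC32 of the key. A sketch of a parser for that layout; the hash-id table and the little-endian CRC trailer are my reading of the secret format, not something this log states:

```python
import base64
import zlib

# hash-id field: 00 = no hash, 01..03 = SHA-256/384/512 (assumed mapping)
HASH_IDS = {"00": "none", "01": "sha256", "02": "sha384", "03": "sha512"}

def parse_dhchap_secret(secret: str):
    """Split a DHHC-1 secret into (hash name, key bytes, crc_ok)."""
    magic, hash_id, b64, trailer = secret.split(":")
    assert magic == "DHHC-1" and trailer == ""
    raw = base64.b64decode(b64)
    key, crc = raw[:-4], raw[-4:]
    crc_ok = crc == zlib.crc32(key).to_bytes(4, "little")
    return HASH_IDS[hash_id], key, crc_ok

# key0 from the trace above (32-byte key, hash-id 00):
h, key, ok = parse_dhchap_secret(
    "DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf:")
print(h, len(key), ok)
```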
nvmet_auth_set_key sha384 ffdhe3072 0 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.988 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.247 nvme0n1 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.247 
11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.247 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.248 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.248 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.248 11:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.507 nvme0n1 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 
00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]] 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.507 11:37:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.507 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.767 nvme0n1 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.767 11:37:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]] 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.767 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.026 nvme0n1 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.026 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.027 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.027 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.027 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.027 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.027 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.027 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.027 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.027 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.027 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.027 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.027 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.286 nvme0n1 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.286 11:37:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.286 11:37:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.286 11:37:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.286 11:37:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.545 nvme0n1 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.545 
11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.545 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.804 nvme0n1 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.804 11:37:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.804 11:37:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]] 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:03.804 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.805 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.064 nvme0n1 00:26:04.064 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.064 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.064 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.064 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.064 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.064 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]] 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.323 11:37:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.323 11:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.583 nvme0n1 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.583 11:37:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.583 11:37:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.583 
11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.583 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.843 nvme0n1 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.843 11:37:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.843 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.412 nvme0n1 
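Every pass through the log above repeats the same pattern from host/auth.sh: `nvmet_auth_set_key` configures the target-side key, `bdev_nvme_set_options` pins the digest and DH group, `bdev_nvme_attach_controller` connects with `--dhchap-key keyN` (adding `--dhchap-ctrlr-key ckeyN` only when a controller key exists for that keyid — note keyid 4 has an empty ckey and the flag is omitted), then the controller is listed and detached. A minimal dry-run sketch of one iteration, with hypothetical placeholder key material and `echo` standing in for actually executing `rpc_cmd`:

```shell
#!/usr/bin/env bash
# Dry-run sketch of one connect_authenticate iteration from host/auth.sh.
# Placeholder key material; real runs use the DHHC-1:... blobs from the log.
set -euo pipefail

ckeys=( "ckey-blob-0" "ckey-blob-1" "" )  # empty entry mimics a keyid with no ctrlr key

connect_authenticate() {
  local digest=$1 dhgroup=$2 keyid=$3
  # Same idiom as auth.sh@58: add the ctrlr-key flag only if a ckey exists.
  local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "rpc_cmd bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
  echo "rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420" \
       "-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0" \
       "--dhchap-key key${keyid}" ${ckey[*]:-}
  echo "rpc_cmd bdev_nvme_detach_controller nvme0"
}

connect_authenticate sha384 ffdhe6144 1
```

The `${ckeys[keyid]:+...}` array expansion is what makes the `--dhchap-ctrlr-key` argument disappear cleanly for key 4 in the log, rather than passing an empty string to the RPC.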
00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:05.412 11:37:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.412 
11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.412 11:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.670 nvme0n1 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.670 11:37:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.670 11:37:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]] 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.670 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.929 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.929 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.929 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.188 nvme0n1 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]] 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:06.188 11:37:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.188 11:37:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.188 11:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.756 nvme0n1 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.756 11:37:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:06.756 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.015 nvme0n1 00:26:07.015 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.015 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.015 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.015 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.015 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.015 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.015 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.015 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.015 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.015 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.275 11:37:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.275 11:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.844 nvme0n1 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.844 11:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.413 nvme0n1 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]] 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:08.413 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.414 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.982 nvme0n1 00:26:08.982 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.982 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.982 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.982 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.982 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.982 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]] 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.241 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.242 11:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.811 nvme0n1 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.811 11:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:10.380 nvme0n1 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:10.380 11:37:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.380 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.381 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.381 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.381 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.381 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.381 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.381 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.381 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.641 nvme0n1 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.641 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.642 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.642 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.642 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.902 nvme0n1 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]] 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.902 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.903 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.903 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.903 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.163 nvme0n1 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]] 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.163 nvme0n1 00:26:11.163 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.422 11:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.422 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:11.423 nvme0n1 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.423 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.683 11:37:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.683 11:37:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.683 nvme0n1 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.683 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:11.943 11:37:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.943 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.944 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.944 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.944 nvme0n1 00:26:11.944 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.944 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.944 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.944 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.944 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.944 
11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.944 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.944 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.944 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.944 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]] 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.203 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.204 11:37:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.204 nvme0n1 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.204 11:37:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]] 00:26:12.204 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.464 11:37:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.464 11:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.464 nvme0n1 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:12.464 11:37:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.464 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:12.465 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.465 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.725 nvme0n1 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.725 
11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.725 
11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.725 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.985 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:12.985 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.985 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.985 nvme0n1 00:26:12.985 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.985 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.985 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.985 11:37:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:12.985 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==:
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==:
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==:
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]]
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==:
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.245 11:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.505 nvme0n1
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE:
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq:
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE:
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]]
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq:
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.505 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.765 nvme0n1
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==:
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8:
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==:
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]]
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8:
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.765 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.025 nvme0n1
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=:
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=:
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.025 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.285 11:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.545 nvme0n1
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf:
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=:
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf:
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]]
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=:
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.545 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.806 nvme0n1
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==:
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==:
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==:
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]]
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==:
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.806 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.375 nvme0n1
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:15.376 11:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE:
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq:
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE:
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]]
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq:
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.376 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.633 nvme0n1
00:26:15.633 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.633 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:15.633 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:15.633 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.633 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==:
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8:
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==:
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]]
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8:
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.891 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.892 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.892 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.892 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.892 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.892 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.892 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.892 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.892 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.892 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.892 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.892 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.892 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.151 nvme0n1 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.151 11:37:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.151 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.152 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.152 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.152 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.152 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.152 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.152 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.152 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.152 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.152 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.152 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.152 11:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.720 nvme0n1 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.720 
11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzE5ZDBhZmE3OTZkMzNhMjIzN2M1NjhmZjFkYzAzMjCyf+xf: 00:26:16.720 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: ]] 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDkzMzM4MTI5OTlhMTJjMTZlNmNhYWI3YTExNGUwMzdmYjQzY2NhMWY5YjJlMTYyMDY0ZGQyYTEyMTI0NmI4ZqQ5O3M=: 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.721 11:37:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.721 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.290 nvme0n1 00:26:17.290 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.290 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.290 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.290 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.290 11:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.290 11:37:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.290 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:17.291 11:37:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.291 11:37:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.291 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.860 nvme0n1 00:26:17.860 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.860 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.860 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.860 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.860 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.860 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.119 11:37:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:18.119 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]] 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:18.120 11:37:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.120 11:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 nvme0n1 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODg2MzdkMjhiZjcxODM4MWRmOWI1MWQ4NTMxOTU3ZTMxODE0MjljMmZlZTMyYTYw0VKy+A==: 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: ]] 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTczNzljNzIzMGY1ZWU0ZWZlYzM0MWJhYTRhNjkwY2QTeXc8: 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.688 11:37:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.688 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.256 nvme0n1 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTI2NGI2NDcxMjAxNTcxMDA3MzU5NmFiMzI2ZDgzNTk2MjZjMzY0OTNlZjJjMTYxM2M2MTZlY2JjNTZhZDkxZmejQYk=: 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.256 
11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.256 11:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.256 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.256 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.256 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.256 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.256 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.824 nvme0n1 00:26:19.824 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.824 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.824 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.824 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.824 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:19.824 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==: 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.084 request: 00:26:20.084 { 00:26:20.084 "name": "nvme0", 00:26:20.084 "trtype": "tcp", 00:26:20.084 "traddr": "10.0.0.1", 00:26:20.084 "adrfam": "ipv4", 00:26:20.084 "trsvcid": "4420", 00:26:20.084 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:20.084 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:20.084 "prchk_reftag": false, 00:26:20.084 "prchk_guard": false, 00:26:20.084 "hdgst": false, 00:26:20.084 "ddgst": false, 00:26:20.084 "allow_unrecognized_csi": false, 00:26:20.084 "method": "bdev_nvme_attach_controller", 00:26:20.084 "req_id": 1 00:26:20.084 } 00:26:20.084 Got JSON-RPC error 
response 00:26:20.084 response: 00:26:20.084 { 00:26:20.084 "code": -5, 00:26:20.084 "message": "Input/output error" 00:26:20.084 } 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.084 request: 
00:26:20.084 { 00:26:20.084 "name": "nvme0", 00:26:20.084 "trtype": "tcp", 00:26:20.084 "traddr": "10.0.0.1", 00:26:20.084 "adrfam": "ipv4", 00:26:20.084 "trsvcid": "4420", 00:26:20.084 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:20.084 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:20.084 "prchk_reftag": false, 00:26:20.084 "prchk_guard": false, 00:26:20.084 "hdgst": false, 00:26:20.084 "ddgst": false, 00:26:20.084 "dhchap_key": "key2", 00:26:20.084 "allow_unrecognized_csi": false, 00:26:20.084 "method": "bdev_nvme_attach_controller", 00:26:20.084 "req_id": 1 00:26:20.084 } 00:26:20.084 Got JSON-RPC error response 00:26:20.084 response: 00:26:20.084 { 00:26:20.084 "code": -5, 00:26:20.084 "message": "Input/output error" 00:26:20.084 } 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.084 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.344 11:37:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.344 request: 00:26:20.344 { 00:26:20.344 "name": "nvme0", 00:26:20.344 "trtype": "tcp", 00:26:20.344 "traddr": "10.0.0.1", 00:26:20.344 "adrfam": "ipv4", 00:26:20.344 "trsvcid": "4420", 00:26:20.344 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:20.344 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:20.344 "prchk_reftag": false, 00:26:20.344 "prchk_guard": false, 00:26:20.344 "hdgst": false, 00:26:20.344 "ddgst": false, 00:26:20.344 "dhchap_key": "key1", 00:26:20.344 "dhchap_ctrlr_key": "ckey2", 00:26:20.344 "allow_unrecognized_csi": false, 00:26:20.344 "method": "bdev_nvme_attach_controller", 00:26:20.344 "req_id": 1 00:26:20.344 } 00:26:20.344 Got JSON-RPC error response 00:26:20.344 response: 00:26:20.344 { 00:26:20.344 "code": -5, 00:26:20.344 "message": "Input/output error" 00:26:20.344 } 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.344 11:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.344 nvme0n1 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:20.344 11:37:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE: 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]] 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: 00:26:20.344 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.345 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.345 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.603 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.603 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.603 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:20.603 
11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.603 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.603 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.603 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.604 request: 00:26:20.604 { 00:26:20.604 "name": "nvme0", 00:26:20.604 "dhchap_key": "key1", 00:26:20.604 "dhchap_ctrlr_key": "ckey2", 00:26:20.604 "method": "bdev_nvme_set_keys", 00:26:20.604 "req_id": 1 00:26:20.604 } 00:26:20.604 Got JSON-RPC error response 00:26:20.604 response: 
00:26:20.604 { 00:26:20.604 "code": -13, 00:26:20.604 "message": "Permission denied" 00:26:20.604 } 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:20.604 11:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:21.982 11:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.982 11:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:21.982 11:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.982 11:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.982 11:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.982 11:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:21.982 11:37:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==:
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==:
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:22.919 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiNTczMjdhODU1YjNiOTQ3ZjczYjA2ZmI1MzUwZWM0YzNkNjQwOTg0ZTg3ODUx3QKHGw==:
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==: ]]
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYWZmOWJjMDAyYTE2NGE1NGJmNGNiYzExZmZjMzRjNDQyZWJmYzg1N2NhMzg2DPKheg==:
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.920 nvme0n1
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE:
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq:
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODliMTU3ZGRlMjgxNzUzNTkzNDEwZjY3ZGUwNTUzN2Gm6RiE:
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq: ]]
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDZlOTllOWU0NWYxYmU0NzFhMTRkZmYzY2IwNjlhNTKsCWmq:
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.920 request:
00:26:22.920 {
00:26:22.920 "name": "nvme0",
00:26:22.920 "dhchap_key": "key2",
00:26:22.920 "dhchap_ctrlr_key": "ckey1",
00:26:22.920 "method": "bdev_nvme_set_keys",
00:26:22.920 "req_id": 1
00:26:22.920 }
00:26:22.920 Got JSON-RPC error response
00:26:22.920 response:
00:26:22.920 {
00:26:22.920 "code": -13,
00:26:22.920 "message": "Permission denied"
00:26:22.920 }
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:26:22.920 11:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:24.298 rmmod nvme_tcp
00:26:24.298 rmmod nvme_fabrics
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2393255 ']'
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2393255
00:26:24.298 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2393255 ']'
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2393255
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2393255
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2393255'
killing process with pid 2393255
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2393255
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2393255
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:24.299 11:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:26:26.835 11:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:26:29.375 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:26:29.375 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:26:30.314 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:26:30.314 11:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.CH6 /tmp/spdk.key-null.NFr /tmp/spdk.key-sha256.63E /tmp/spdk.key-sha384.dHZ /tmp/spdk.key-sha512.rwe /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log
00:26:30.314 11:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:26:33.605 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:26:33.605 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:26:33.605 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:26:33.605
00:26:33.605 real 0m54.086s
00:26:33.605 user 0m48.900s
00:26:33.605 sys 0m12.604s
00:26:33.605 11:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:33.605 11:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.605 ************************************
00:26:33.605 END TEST nvmf_auth_host
00:26:33.605 ************************************
00:26:33.605 11:37:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]]
00:26:33.605 11:37:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:26:33.605 11:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:33.605 11:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:33.605 11:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.605 ************************************
00:26:33.605 START TEST nvmf_digest
00:26:33.605 ************************************
00:26:33.605 11:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:26:33.605 * Looking for test storage...
00:26:33.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-:
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-:
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<'
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:33.605 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:26:33.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:33.606 --rc genhtml_branch_coverage=1
00:26:33.606 --rc genhtml_function_coverage=1
00:26:33.606 --rc genhtml_legend=1
00:26:33.606 --rc geninfo_all_blocks=1
00:26:33.606 --rc geninfo_unexecuted_blocks=1
00:26:33.606
00:26:33.606 '
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:26:33.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:33.606 --rc genhtml_branch_coverage=1
00:26:33.606 --rc genhtml_function_coverage=1
00:26:33.606 --rc genhtml_legend=1
00:26:33.606 --rc geninfo_all_blocks=1
00:26:33.606 --rc geninfo_unexecuted_blocks=1
00:26:33.606
00:26:33.606 '
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:26:33.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:33.606 --rc genhtml_branch_coverage=1
00:26:33.606 --rc genhtml_function_coverage=1
00:26:33.606 --rc genhtml_legend=1
00:26:33.606 --rc geninfo_all_blocks=1
00:26:33.606 --rc geninfo_unexecuted_blocks=1
00:26:33.606
00:26:33.606 '
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:26:33.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:33.606 --rc genhtml_branch_coverage=1
00:26:33.606 --rc genhtml_function_coverage=1
00:26:33.606 --rc genhtml_legend=1
00:26:33.606 --rc geninfo_all_blocks=1
00:26:33.606 --rc geninfo_unexecuted_blocks=1
00:26:33.606
00:26:33.606 '
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]]
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable
00:26:33.606 11:37:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=()
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=()
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=()
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=()
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=()
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:26:40.181 Found 0000:86:00.0 (0x8086 - 0x159b)
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:40.181 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:26:40.182 Found 0000:86:00.1 (0x8086 - 0x159b)
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:26:40.182 Found net devices under 0000:86:00.0: cvl_0_0
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:26:40.182 Found net devices under 0000:86:00.1: cvl_0_1
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:40.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:26:40.182 00:26:40.182 --- 10.0.0.2 ping statistics --- 00:26:40.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.182 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:26:40.182 00:26:40.182 --- 10.0.0.1 ping statistics --- 00:26:40.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.182 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:40.182 11:37:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.182 ************************************ 00:26:40.182 START TEST nvmf_digest_clean 00:26:40.182 ************************************ 00:26:40.182 
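The target-namespace bring-up traced above (`nvmf/common.sh` @250–291) can be condensed into a short sketch. Interface names and addresses are the ones from this run; applying it needs root, so the sketch takes a `DRY_RUN` switch (an invented convenience, not part of the SPDK scripts) that only prints the commands.

```shell
#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init sequence in the trace above: move the
# target NIC into its own network namespace, address both ends of the link,
# bring everything up, open the NVMe/TCP port, and verify with ping.
# Needs root to actually apply; DRY_RUN=1 just prints the commands.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

nvmf_tcp_init_sketch() {
    run ip -4 addr flush cvl_0_0
    run ip -4 addr flush cvl_0_1
    run ip netns add cvl_0_0_ns_spdk
    run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    run ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
    run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
    run ip link set cvl_0_1 up
    run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    run ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the ipts helper wraps iptables and tags the rule with an SPDK_NVMF comment
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                                   # initiator -> target
    run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
}
```

Any `nvmf_tgt` started afterwards is launched under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix seen in the trace), which is why the target listens on 10.0.0.2 while bdevperf connects from the default namespace.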
11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2407013 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2407013 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2407013 ']' 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.182 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.182 11:37:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.183 [2024-11-19 11:37:53.071336] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:26:40.183 [2024-11-19 11:37:53.071377] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.183 [2024-11-19 11:37:53.150589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.183 [2024-11-19 11:37:53.192545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.183 [2024-11-19 11:37:53.192577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.183 [2024-11-19 11:37:53.192585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.183 [2024-11-19 11:37:53.192591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.183 [2024-11-19 11:37:53.192596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:40.183 [2024-11-19 11:37:53.193030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.183 null0 00:26:40.183 [2024-11-19 11:37:53.348628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.183 [2024-11-19 11:37:53.372815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2407042 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2407042 /var/tmp/bperf.sock 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2407042 ']' 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:40.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
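For reference, the bdevperf command line that produced the "Waiting for process..." message above, with each flag glossed. The glosses are assumptions based on bdevperf's usage text, not something the log itself states, and `$rootdir` stands in for the workspace path.

```shell
# Annotated reconstruction of the bdevperf invocation above. Building the
# argument list as an array keeps the per-flag comments valid shell.
bperf_args=(
    -m 2                    # core mask 0x2: pin the reactor to core 1
    -r /var/tmp/bperf.sock  # RPC socket that bperf_rpc/bperf_py talk to
    -w randread             # workload pattern
    -o 4096                 # I/O size in bytes
    -t 2                    # run time in seconds
    -q 128                  # queue depth
    -z                      # start idle; wait for the perform_tests RPC
    --wait-for-rpc          # defer framework init so digest opts can be set first
)
# "$rootdir/build/examples/bdevperf" "${bperf_args[@]}" &
```

The `-z`/`--wait-for-rpc` pair is what lets the test first call `framework_start_init` and `bdev_nvme_attach_controller --ddgst` over `/var/tmp/bperf.sock` before any I/O starts.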
00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.183 [2024-11-19 11:37:53.426285] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:26:40.183 [2024-11-19 11:37:53.426326] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407042 ] 00:26:40.183 [2024-11-19 11:37:53.501365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.183 [2024-11-19 11:37:53.542205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.183 11:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.442 nvme0n1 00:26:40.442 11:37:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:40.442 11:37:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:40.442 Running I/O for 2 seconds... 00:26:42.464 24614.00 IOPS, 96.15 MiB/s [2024-11-19T10:37:56.245Z] 24663.50 IOPS, 96.34 MiB/s 00:26:42.464 Latency(us) 00:26:42.464 [2024-11-19T10:37:56.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.464 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:42.464 nvme0n1 : 2.01 24664.74 96.35 0.00 0.00 5185.04 2293.76 11226.60 00:26:42.464 [2024-11-19T10:37:56.245Z] =================================================================================================================== 00:26:42.464 [2024-11-19T10:37:56.245Z] Total : 24664.74 96.35 0.00 0.00 5185.04 2293.76 11226.60 00:26:42.464 { 00:26:42.464 "results": [ 00:26:42.464 { 00:26:42.464 "job": "nvme0n1", 00:26:42.464 "core_mask": "0x2", 00:26:42.464 "workload": "randread", 00:26:42.464 "status": "finished", 00:26:42.464 "queue_depth": 128, 00:26:42.464 "io_size": 4096, 00:26:42.464 "runtime": 2.005089, 00:26:42.464 "iops": 24664.740567625675, 00:26:42.464 "mibps": 96.34664284228779, 00:26:42.464 "io_failed": 0, 00:26:42.464 "io_timeout": 0, 00:26:42.464 "avg_latency_us": 5185.040899895821, 00:26:42.464 "min_latency_us": 2293.76, 00:26:42.464 "max_latency_us": 11226.601739130434 00:26:42.464 } 00:26:42.464 ], 00:26:42.464 "core_count": 1 00:26:42.464 } 00:26:42.464 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:42.464 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
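The IOPS and MiB/s columns in the table above are related through the I/O size; recomputing one from the other is a quick sanity check on the JSON fields (numbers taken from this run).

```shell
# Cross-check bdevperf's reported bandwidth: MiB/s = IOPS * io_size / 2^20.
# 24664.74 IOPS of 4096-byte reads should match the 96.35 MiB/s in the table.
iops=24664.74
io_size=4096
awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'
# prints: 96.35 MiB/s
```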
00:26:42.464 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:42.464 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:42.464 | select(.opcode=="crc32c") 00:26:42.464 | "\(.module_name) \(.executed)"' 00:26:42.464 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:42.724 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:42.724 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:42.724 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:42.724 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:42.724 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2407042 00:26:42.724 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2407042 ']' 00:26:42.724 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2407042 00:26:42.724 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:42.724 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.724 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2407042 00:26:42.724 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2407042' 00:26:42.983 killing process with pid 2407042 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2407042 00:26:42.983 Received shutdown signal, test time was about 2.000000 seconds 00:26:42.983 00:26:42.983 Latency(us) 00:26:42.983 [2024-11-19T10:37:56.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.983 [2024-11-19T10:37:56.764Z] =================================================================================================================== 00:26:42.983 [2024-11-19T10:37:56.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2407042 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2407521 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2407521 
/var/tmp/bperf.sock 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2407521 ']' 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:42.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:42.983 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:42.983 [2024-11-19 11:37:56.702827] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:26:42.983 [2024-11-19 11:37:56.702876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407521 ] 00:26:42.983 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:42.983 Zero copy mechanism will not be used. 
00:26:43.242 [2024-11-19 11:37:56.777734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.242 [2024-11-19 11:37:56.820284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.242 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.242 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:43.242 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:43.242 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:43.242 11:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:43.501 11:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:43.501 11:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:43.760 nvme0n1 00:26:43.760 11:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:43.760 11:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:44.020 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:44.020 Zero copy mechanism will not be used. 00:26:44.020 Running I/O for 2 seconds... 
00:26:45.914 5488.00 IOPS, 686.00 MiB/s [2024-11-19T10:37:59.695Z] 5672.00 IOPS, 709.00 MiB/s 00:26:45.914 Latency(us) 00:26:45.914 [2024-11-19T10:37:59.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.914 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:45.914 nvme0n1 : 2.00 5671.83 708.98 0.00 0.00 2818.28 616.18 6126.19 00:26:45.914 [2024-11-19T10:37:59.695Z] =================================================================================================================== 00:26:45.914 [2024-11-19T10:37:59.695Z] Total : 5671.83 708.98 0.00 0.00 2818.28 616.18 6126.19 00:26:45.914 { 00:26:45.914 "results": [ 00:26:45.914 { 00:26:45.914 "job": "nvme0n1", 00:26:45.914 "core_mask": "0x2", 00:26:45.914 "workload": "randread", 00:26:45.914 "status": "finished", 00:26:45.914 "queue_depth": 16, 00:26:45.914 "io_size": 131072, 00:26:45.914 "runtime": 2.00288, 00:26:45.914 "iops": 5671.832561111999, 00:26:45.914 "mibps": 708.9790701389999, 00:26:45.914 "io_failed": 0, 00:26:45.914 "io_timeout": 0, 00:26:45.914 "avg_latency_us": 2818.275781996326, 00:26:45.914 "min_latency_us": 616.1808695652173, 00:26:45.914 "max_latency_us": 6126.191304347826 00:26:45.914 } 00:26:45.914 ], 00:26:45.914 "core_count": 1 00:26:45.914 } 00:26:45.914 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:45.914 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:45.914 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:45.914 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:45.914 | select(.opcode=="crc32c") 00:26:45.914 | "\(.module_name) \(.executed)"' 00:26:45.914 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2407521 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2407521 ']' 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2407521 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2407521 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2407521' 00:26:46.174 killing process with pid 2407521 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2407521 00:26:46.174 Received shutdown signal, test time was about 2.000000 seconds 
00:26:46.174 00:26:46.174 Latency(us) 00:26:46.174 [2024-11-19T10:37:59.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.174 [2024-11-19T10:37:59.955Z] =================================================================================================================== 00:26:46.174 [2024-11-19T10:37:59.955Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:46.174 11:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2407521 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2408158 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2408158 /var/tmp/bperf.sock 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2408158 ']' 00:26:46.435 11:38:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:46.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:46.435 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:46.435 [2024-11-19 11:38:00.127893] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:26:46.435 [2024-11-19 11:38:00.127946] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408158 ] 00:26:46.435 [2024-11-19 11:38:00.205716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.694 [2024-11-19 11:38:00.248794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.694 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.694 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:46.694 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:46.694 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:46.694 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:46.954 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.954 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.214 nvme0n1 00:26:47.214 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:47.214 11:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:47.214 Running I/O for 2 seconds... 
00:26:49.530 27680.00 IOPS, 108.12 MiB/s [2024-11-19T10:38:03.311Z] 27843.50 IOPS, 108.76 MiB/s 00:26:49.530 Latency(us) 00:26:49.530 [2024-11-19T10:38:03.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.530 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:49.530 nvme0n1 : 2.01 27859.67 108.83 0.00 0.00 4588.44 1809.36 8947.09 00:26:49.530 [2024-11-19T10:38:03.311Z] =================================================================================================================== 00:26:49.530 [2024-11-19T10:38:03.311Z] Total : 27859.67 108.83 0.00 0.00 4588.44 1809.36 8947.09 00:26:49.530 { 00:26:49.530 "results": [ 00:26:49.530 { 00:26:49.530 "job": "nvme0n1", 00:26:49.530 "core_mask": "0x2", 00:26:49.530 "workload": "randwrite", 00:26:49.530 "status": "finished", 00:26:49.530 "queue_depth": 128, 00:26:49.530 "io_size": 4096, 00:26:49.530 "runtime": 2.005731, 00:26:49.530 "iops": 27859.668120999275, 00:26:49.530 "mibps": 108.82682859765342, 00:26:49.530 "io_failed": 0, 00:26:49.530 "io_timeout": 0, 00:26:49.530 "avg_latency_us": 4588.440663576656, 00:26:49.530 "min_latency_us": 1809.3634782608697, 00:26:49.530 "max_latency_us": 8947.088695652174 00:26:49.530 } 00:26:49.530 ], 00:26:49.530 "core_count": 1 00:26:49.530 } 00:26:49.530 11:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:49.530 11:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:49.530 11:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:49.530 11:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:49.530 | select(.opcode=="crc32c") 00:26:49.530 | "\(.module_name) \(.executed)"' 00:26:49.530 11:38:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2408158 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2408158 ']' 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2408158 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2408158 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2408158' 00:26:49.530 killing process with pid 2408158 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2408158 00:26:49.530 Received shutdown signal, test time was about 2.000000 seconds 
00:26:49.530 00:26:49.530 Latency(us) 00:26:49.530 [2024-11-19T10:38:03.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.530 [2024-11-19T10:38:03.311Z] =================================================================================================================== 00:26:49.530 [2024-11-19T10:38:03.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.530 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2408158 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2408680 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2408680 /var/tmp/bperf.sock 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2408680 ']' 00:26:49.790 11:38:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:49.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:49.790 [2024-11-19 11:38:03.396912] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:26:49.790 [2024-11-19 11:38:03.396967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408680 ] 00:26:49.790 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:49.790 Zero copy mechanism will not be used. 
00:26:49.790 [2024-11-19 11:38:03.472876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.790 [2024-11-19 11:38:03.515197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:49.790 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:50.050 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.050 11:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.310 nvme0n1 00:26:50.570 11:38:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:50.570 11:38:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:50.570 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:50.570 Zero copy mechanism will not be used. 00:26:50.570 Running I/O for 2 seconds... 
00:26:52.444 6098.00 IOPS, 762.25 MiB/s [2024-11-19T10:38:06.225Z] 6281.00 IOPS, 785.12 MiB/s 00:26:52.444 Latency(us) 00:26:52.444 [2024-11-19T10:38:06.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.444 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:52.444 nvme0n1 : 2.00 6281.55 785.19 0.00 0.00 2543.11 1374.83 5584.81 00:26:52.444 [2024-11-19T10:38:06.225Z] =================================================================================================================== 00:26:52.444 [2024-11-19T10:38:06.225Z] Total : 6281.55 785.19 0.00 0.00 2543.11 1374.83 5584.81 00:26:52.444 { 00:26:52.444 "results": [ 00:26:52.444 { 00:26:52.444 "job": "nvme0n1", 00:26:52.444 "core_mask": "0x2", 00:26:52.444 "workload": "randwrite", 00:26:52.444 "status": "finished", 00:26:52.444 "queue_depth": 16, 00:26:52.444 "io_size": 131072, 00:26:52.444 "runtime": 2.003168, 00:26:52.444 "iops": 6281.550024760779, 00:26:52.444 "mibps": 785.1937530950973, 00:26:52.444 "io_failed": 0, 00:26:52.444 "io_timeout": 0, 00:26:52.444 "avg_latency_us": 2543.109195913051, 00:26:52.444 "min_latency_us": 1374.831304347826, 00:26:52.444 "max_latency_us": 5584.806956521739 00:26:52.444 } 00:26:52.444 ], 00:26:52.444 "core_count": 1 00:26:52.444 } 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:52.703 | select(.opcode=="crc32c") 00:26:52.703 | "\(.module_name) \(.executed)"' 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2408680 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2408680 ']' 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2408680 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:52.703 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.704 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2408680 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2408680' 00:26:52.963 killing process with pid 2408680 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2408680 00:26:52.963 Received shutdown signal, test time was about 2.000000 seconds 
00:26:52.963 00:26:52.963 Latency(us) 00:26:52.963 [2024-11-19T10:38:06.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.963 [2024-11-19T10:38:06.744Z] =================================================================================================================== 00:26:52.963 [2024-11-19T10:38:06.744Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2408680 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2407013 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2407013 ']' 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2407013 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2407013 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2407013' 00:26:52.963 killing process with pid 2407013 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2407013 00:26:52.963 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2407013 00:26:53.222 00:26:53.222 
real 0m13.834s 00:26:53.222 user 0m26.511s 00:26:53.222 sys 0m4.570s 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:53.222 ************************************ 00:26:53.222 END TEST nvmf_digest_clean 00:26:53.222 ************************************ 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:53.222 ************************************ 00:26:53.222 START TEST nvmf_digest_error 00:26:53.222 ************************************ 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2409190 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2409190 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2409190 ']' 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.222 11:38:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.222 [2024-11-19 11:38:06.981963] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:26:53.222 [2024-11-19 11:38:06.982011] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.481 [2024-11-19 11:38:07.062907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.481 [2024-11-19 11:38:07.103492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.481 [2024-11-19 11:38:07.103530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:53.481 [2024-11-19 11:38:07.103537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.481 [2024-11-19 11:38:07.103543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.481 [2024-11-19 11:38:07.103549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.481 [2024-11-19 11:38:07.104136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.481 [2024-11-19 11:38:07.168572] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.481 11:38:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.481 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.481 null0 00:26:53.741 [2024-11-19 11:38:07.259282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.741 [2024-11-19 11:38:07.283475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2409392 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2409392 /var/tmp/bperf.sock 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2409392 ']' 
00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:53.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.741 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.741 [2024-11-19 11:38:07.333963] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:26:53.742 [2024-11-19 11:38:07.334022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409392 ] 00:26:53.742 [2024-11-19 11:38:07.407957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.742 [2024-11-19 11:38:07.450725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.002 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.002 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:54.002 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:54.002 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:54.003 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:54.003 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.003 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.003 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.003 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.003 11:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.571 nvme0n1 00:26:54.571 11:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:54.571 11:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.571 11:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.571 11:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.571 11:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:54.571 11:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:54.571 Running I/O for 2 seconds... 00:26:54.571 [2024-11-19 11:38:08.265850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.571 [2024-11-19 11:38:08.265884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.571 [2024-11-19 11:38:08.265894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.571 [2024-11-19 11:38:08.277814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.571 [2024-11-19 11:38:08.277838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.571 [2024-11-19 11:38:08.277847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.571 [2024-11-19 11:38:08.288886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.571 [2024-11-19 11:38:08.288908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.571 [2024-11-19 11:38:08.288917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.571 [2024-11-19 11:38:08.297770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.572 [2024-11-19 11:38:08.297792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23129 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.572 [2024-11-19 11:38:08.297800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.572 [2024-11-19 11:38:08.311504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.572 [2024-11-19 11:38:08.311526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.572 [2024-11-19 11:38:08.311534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.572 [2024-11-19 11:38:08.323129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.572 [2024-11-19 11:38:08.323150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.572 [2024-11-19 11:38:08.323158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.572 [2024-11-19 11:38:08.336415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.572 [2024-11-19 11:38:08.336437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.572 [2024-11-19 11:38:08.336445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.572 [2024-11-19 11:38:08.344509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.572 [2024-11-19 11:38:08.344531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.572 [2024-11-19 11:38:08.344543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.356295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.356319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.356329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.368268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.368290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.368298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.377246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.377269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.377277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.386943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.386971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.386980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.395505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.395527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.395535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.405552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.405574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.405582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.416931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.416957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.416966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.425518] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.425540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.425548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.437307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.437332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.437341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.447493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.447514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.447523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.455965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.455986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.455994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.468139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.468161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.468170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.477031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.477053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.477061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.488956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.488977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.488985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.497910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.497931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.497939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.510633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.510654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.510662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.522142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.522164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.522172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.833 [2024-11-19 11:38:08.534839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.833 [2024-11-19 11:38:08.534861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.833 [2024-11-19 11:38:08.534869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.834 [2024-11-19 11:38:08.545205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.834 [2024-11-19 11:38:08.545226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.834 [2024-11-19 11:38:08.545234] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.834 [2024-11-19 11:38:08.553460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.834 [2024-11-19 11:38:08.553482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.834 [2024-11-19 11:38:08.553490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.834 [2024-11-19 11:38:08.565363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.834 [2024-11-19 11:38:08.565384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.834 [2024-11-19 11:38:08.565392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.834 [2024-11-19 11:38:08.575989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.834 [2024-11-19 11:38:08.576010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.834 [2024-11-19 11:38:08.576018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.834 [2024-11-19 11:38:08.586336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.834 [2024-11-19 11:38:08.586359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12791 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:54.834 [2024-11-19 11:38:08.586367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.834 [2024-11-19 11:38:08.596929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.834 [2024-11-19 11:38:08.596955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.834 [2024-11-19 11:38:08.596964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.834 [2024-11-19 11:38:08.605765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:54.834 [2024-11-19 11:38:08.605787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.834 [2024-11-19 11:38:08.605795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.615983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.616005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.616017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.624355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.624375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:5485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.624383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.635039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.635061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.635069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.644841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.644862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.644871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.654174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.654197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.654206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.667075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 
11:38:08.667098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.667106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.675363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.675385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.675394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.687475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.687496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.687505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.695331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.695353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.695362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.705848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.705869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.705878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.715404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.715425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.715433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.725681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.725702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.725710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.734194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.734216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.734224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.744240] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.744263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.744271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.755078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.755099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.755108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.765220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.765240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.765249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.773831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.773852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.773860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:55.095 [2024-11-19 11:38:08.787220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.095 [2024-11-19 11:38:08.787242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.095 [2024-11-19 11:38:08.787254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.096 [2024-11-19 11:38:08.795349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.096 [2024-11-19 11:38:08.795370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.096 [2024-11-19 11:38:08.795379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.096 [2024-11-19 11:38:08.806872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.096 [2024-11-19 11:38:08.806893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.096 [2024-11-19 11:38:08.806901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.096 [2024-11-19 11:38:08.819566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.096 [2024-11-19 11:38:08.819587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.096 [2024-11-19 11:38:08.819595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.096 [2024-11-19 11:38:08.828161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.096 [2024-11-19 11:38:08.828181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.096 [2024-11-19 11:38:08.828189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.096 [2024-11-19 11:38:08.840174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.096 [2024-11-19 11:38:08.840195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.096 [2024-11-19 11:38:08.840204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.096 [2024-11-19 11:38:08.853110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.096 [2024-11-19 11:38:08.853133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.096 [2024-11-19 11:38:08.853142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.096 [2024-11-19 11:38:08.861280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.096 [2024-11-19 11:38:08.861301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.096 [2024-11-19 11:38:08.861309] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.357 [2024-11-19 11:38:08.873321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.357 [2024-11-19 11:38:08.873343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.357 [2024-11-19 11:38:08.873351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.357 [2024-11-19 11:38:08.884365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.357 [2024-11-19 11:38:08.884389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.357 [2024-11-19 11:38:08.884397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.357 [2024-11-19 11:38:08.892882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.357 [2024-11-19 11:38:08.892903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.357 [2024-11-19 11:38:08.892912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.357 [2024-11-19 11:38:08.903731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.357 [2024-11-19 11:38:08.903751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12711 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:55.357 [2024-11-19 11:38:08.903760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.357 [2024-11-19 11:38:08.913810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.357 [2024-11-19 11:38:08.913831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.357 [2024-11-19 11:38:08.913839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.357 [2024-11-19 11:38:08.922571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.357 [2024-11-19 11:38:08.922592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.357 [2024-11-19 11:38:08.922600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.357 [2024-11-19 11:38:08.933035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.357 [2024-11-19 11:38:08.933056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.357 [2024-11-19 11:38:08.933065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.357 [2024-11-19 11:38:08.941933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.357 [2024-11-19 11:38:08.941959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:19 nsid:1 lba:10204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.357 [2024-11-19 11:38:08.941968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.357 [2024-11-19 11:38:08.952043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.357 [2024-11-19 11:38:08.952064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.357 [2024-11-19 11:38:08.952073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:08.960898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:08.960918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:08.960926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:08.970399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:08.970420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:08.970428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:08.980783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 
11:38:08.980803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:08.980811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:08.989979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:08.989999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:08.990008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:08.999190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:08.999211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:08.999220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.011355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.011376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.011385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.023029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.023050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.023058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.031972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.031992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.032000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.041881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.041902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.041910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.051196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.051217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.051228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.060683] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.060704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.060712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.070782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.070804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.070813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.080206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.080227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.080236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.089578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.089600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.089608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.099280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.099301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.099310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.112546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.112568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.112576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.122977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.122999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.123007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.358 [2024-11-19 11:38:09.131581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.358 [2024-11-19 11:38:09.131602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.358 [2024-11-19 11:38:09.131611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.143447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.143474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.143482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.152409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.152430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.152438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.163172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.163192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.163200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.173966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.173986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 
11:38:09.173995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.183278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.183299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.183308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.192630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.192651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.192659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.202749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.202769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.202777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.210972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.210993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9792 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.211001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.221144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.221163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.221175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.229883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.229904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.229912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.239635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.239656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.239664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 24656.00 IOPS, 96.31 MiB/s [2024-11-19T10:38:09.401Z] [2024-11-19 11:38:09.249531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 
11:38:09.249554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.249562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.261706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.261727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.261735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.274424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.274444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.274453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.284875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.284899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.284907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.293680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.293701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.293709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.305648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.305669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.305678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.318835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.318859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.318868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.329694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.329714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.329723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.337832] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.337853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.337861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.348076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.348096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.348104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.620 [2024-11-19 11:38:09.357997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.620 [2024-11-19 11:38:09.358018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.620 [2024-11-19 11:38:09.358026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.621 [2024-11-19 11:38:09.367271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.621 [2024-11-19 11:38:09.367292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.621 [2024-11-19 11:38:09.367300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:55.621 [2024-11-19 11:38:09.376595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.621 [2024-11-19 11:38:09.376616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.621 [2024-11-19 11:38:09.376624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.621 [2024-11-19 11:38:09.385918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.621 [2024-11-19 11:38:09.385939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.621 [2024-11-19 11:38:09.385952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.621 [2024-11-19 11:38:09.395934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.621 [2024-11-19 11:38:09.395960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.621 [2024-11-19 11:38:09.395969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.407794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.407816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.407824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.420401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.420422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.420430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.432266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.432287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.432295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.444912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.444932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.444940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.453915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.453936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 
11:38:09.453944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.464530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.464551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.464559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.473970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.473991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.473999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.484561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.484581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.484589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.493614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.493635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24014 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.493647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.502094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.502115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.502123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.513067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.513088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.513096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.522492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.522512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.522521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.531158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.531179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.531187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.541834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.541855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.541864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.887 [2024-11-19 11:38:09.553669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.887 [2024-11-19 11:38:09.553690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.887 [2024-11-19 11:38:09.553698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.888 [2024-11-19 11:38:09.562605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.888 [2024-11-19 11:38:09.562625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.888 [2024-11-19 11:38:09.562634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.888 [2024-11-19 11:38:09.574169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x157d370) 00:26:55.888 [2024-11-19 11:38:09.574190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.888 [2024-11-19 11:38:09.574198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.888 [2024-11-19 11:38:09.583757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.888 [2024-11-19 11:38:09.583778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.888 [2024-11-19 11:38:09.583786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.888 [2024-11-19 11:38:09.592659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.888 [2024-11-19 11:38:09.592680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.888 [2024-11-19 11:38:09.592688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.888 [2024-11-19 11:38:09.604217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.888 [2024-11-19 11:38:09.604239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.888 [2024-11-19 11:38:09.604247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.888 [2024-11-19 11:38:09.613065] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.888 [2024-11-19 11:38:09.613086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.888 [2024-11-19 11:38:09.613095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.888 [2024-11-19 11:38:09.622706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.888 [2024-11-19 11:38:09.622727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.888 [2024-11-19 11:38:09.622736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.888 [2024-11-19 11:38:09.632879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.888 [2024-11-19 11:38:09.632901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.888 [2024-11-19 11:38:09.632909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.888 [2024-11-19 11:38:09.641946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.888 [2024-11-19 11:38:09.641972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.888 [2024-11-19 11:38:09.641980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:55.888 [2024-11-19 11:38:09.652434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.888 [2024-11-19 11:38:09.652455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.888 [2024-11-19 11:38:09.652464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.888 [2024-11-19 11:38:09.660801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:55.888 [2024-11-19 11:38:09.660821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.888 [2024-11-19 11:38:09.660833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.672648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.672669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.672678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.682419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.682439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.682448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.690729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.690749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.690757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.701995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.702016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.702024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.712555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.712576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.712585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.721373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.721394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 
11:38:09.721402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.731898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.731920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.731928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.740551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.740572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.740581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.750905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.750930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.750938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.760928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.760954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13789 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.760964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.769176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.769197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.769205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.778743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.778764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.778772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.788042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.788064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.788072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.798810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.798831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.798840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.810084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.810106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.810115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.818984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.819006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.819015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.830145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.830167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.830175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.840543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.840566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.840574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.149 [2024-11-19 11:38:09.850259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.149 [2024-11-19 11:38:09.850280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.149 [2024-11-19 11:38:09.850289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.150 [2024-11-19 11:38:09.859725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.150 [2024-11-19 11:38:09.859747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.150 [2024-11-19 11:38:09.859755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.150 [2024-11-19 11:38:09.869057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.150 [2024-11-19 11:38:09.869078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.150 [2024-11-19 11:38:09.869086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.150 [2024-11-19 11:38:09.877553] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.150 [2024-11-19 11:38:09.877575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.150 [2024-11-19 11:38:09.877583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.150 [2024-11-19 11:38:09.888439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.150 [2024-11-19 11:38:09.888460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.150 [2024-11-19 11:38:09.888469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.150 [2024-11-19 11:38:09.896999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.150 [2024-11-19 11:38:09.897020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.150 [2024-11-19 11:38:09.897028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.150 [2024-11-19 11:38:09.909323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.150 [2024-11-19 11:38:09.909344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.150 [2024-11-19 11:38:09.909353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:56.150 [2024-11-19 11:38:09.918216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.150 [2024-11-19 11:38:09.918237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.150 [2024-11-19 11:38:09.918250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:09.929681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:09.929703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:09.929711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:09.940000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:09.940021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:09.940030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:09.952136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:09.952157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:09.952165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:09.960477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:09.960497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:09.960505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:09.970769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:09.970791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:09.970799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:09.982660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:09.982681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:09.982690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:09.991894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:09.991914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 
11:38:09.991922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.002608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.002628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.002637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.013609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.013663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.013688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.023198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.023223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.023232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.032498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.032521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13755 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.032530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.043785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.043809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.043818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.052341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.052362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.052370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.063417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.063438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.063447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.075347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.075368] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.075377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.084137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.084186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.084223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.098765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.098788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.098797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.111730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.111752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.111761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.120137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 
11:38:10.120158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.120167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.131108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.131130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.131139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.140855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.140878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.140886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.152152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.152174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.152182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.160715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.160736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.160744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.172543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.172566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.172575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.412 [2024-11-19 11:38:10.183935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.412 [2024-11-19 11:38:10.183963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.412 [2024-11-19 11:38:10.183972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.673 [2024-11-19 11:38:10.192262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.673 [2024-11-19 11:38:10.192289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.673 [2024-11-19 11:38:10.192298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.673 [2024-11-19 11:38:10.204640] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.673 [2024-11-19 11:38:10.204663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.673 [2024-11-19 11:38:10.204671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.673 [2024-11-19 11:38:10.216945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.673 [2024-11-19 11:38:10.216972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.673 [2024-11-19 11:38:10.216981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.673 [2024-11-19 11:38:10.226060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.673 [2024-11-19 11:38:10.226082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.673 [2024-11-19 11:38:10.226090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.673 [2024-11-19 11:38:10.239424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.673 [2024-11-19 11:38:10.239446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.673 [2024-11-19 11:38:10.239454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:56.673 24711.00 IOPS, 96.53 MiB/s [2024-11-19T10:38:10.454Z] [2024-11-19 11:38:10.252211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157d370) 00:26:56.674 [2024-11-19 11:38:10.252234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.674 [2024-11-19 11:38:10.252243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.674 00:26:56.674 Latency(us) 00:26:56.674 [2024-11-19T10:38:10.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.674 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:56.674 nvme0n1 : 2.00 24718.91 96.56 0.00 0.00 5172.71 2649.93 19375.86 00:26:56.674 [2024-11-19T10:38:10.455Z] =================================================================================================================== 00:26:56.674 [2024-11-19T10:38:10.455Z] Total : 24718.91 96.56 0.00 0.00 5172.71 2649.93 19375.86 00:26:56.674 { 00:26:56.674 "results": [ 00:26:56.674 { 00:26:56.674 "job": "nvme0n1", 00:26:56.674 "core_mask": "0x2", 00:26:56.674 "workload": "randread", 00:26:56.674 "status": "finished", 00:26:56.674 "queue_depth": 128, 00:26:56.674 "io_size": 4096, 00:26:56.674 "runtime": 2.004538, 00:26:56.674 "iops": 24718.912786886554, 00:26:56.674 "mibps": 96.5582530737756, 00:26:56.674 "io_failed": 0, 00:26:56.674 "io_timeout": 0, 00:26:56.674 "avg_latency_us": 5172.710791979994, 00:26:56.674 "min_latency_us": 2649.9339130434782, 00:26:56.674 "max_latency_us": 19375.86086956522 00:26:56.674 } 00:26:56.674 ], 00:26:56.674 "core_count": 1 00:26:56.674 } 00:26:56.674 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:56.674 11:38:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:56.674 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:56.674 | .driver_specific 00:26:56.674 | .nvme_error 00:26:56.674 | .status_code 00:26:56.674 | .command_transient_transport_error' 00:26:56.674 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 )) 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2409392 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2409392 ']' 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2409392 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2409392 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2409392' 00:26:56.934 killing process with pid 2409392 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 2409392 00:26:56.934 Received shutdown signal, test time was about 2.000000 seconds 00:26:56.934 00:26:56.934 Latency(us) 00:26:56.934 [2024-11-19T10:38:10.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.934 [2024-11-19T10:38:10.715Z] =================================================================================================================== 00:26:56.934 [2024-11-19T10:38:10.715Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2409392 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2409894 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2409894 /var/tmp/bperf.sock 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2409894 ']' 00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:56.934 11:38:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:56.934 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:57.193 [2024-11-19 11:38:10.738762] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:26:57.194 [2024-11-19 11:38:10.738808] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409894 ]
00:26:57.194 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:57.194 Zero copy mechanism will not be used.
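The `waitforlisten 2409894 /var/tmp/bperf.sock` call above polls (here with max_retries=100) until the freshly launched bdevperf opens its private RPC socket. A simplified, self-contained sketch of that polling pattern; the function name, retry count, and demo path are illustrative, not SPDK's exact implementation:

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket path appears, giving up after max_retries
# attempts. Accepts any filesystem entry so the demo below can stand in for
# a real socket.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $sock || -e $sock ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# Demo: create the "socket" shortly after we start waiting on it.
rm -f /tmp/demo_bperf.sock
( sleep 0.2; touch /tmp/demo_bperf.sock ) &
wait_for_socket /tmp/demo_bperf.sock 50 && echo "listening"
wait
```

In the harness, failure of this wait (rather than of the I/O itself) is what distinguishes a bdevperf startup problem from a digest-test failure.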
00:26:57.194 [2024-11-19 11:38:10.815627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:57.194 [2024-11-19 11:38:10.853038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:57.194 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:57.194 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:57.194 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:57.194 11:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:57.452 11:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:57.452 11:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:57.452 11:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:57.452 11:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:57.453 11:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:57.453 11:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:58.022 nvme0n1
00:26:58.022 11:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
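Every `bperf_rpc ...` record above expands (at host/digest.sh@18) into `rpc.py -s /var/tmp/bperf.sock ...`, i.e. the same RPC is routed to the bdevperf instance's private socket instead of the default SPDK socket. A minimal sketch of that wrapper; the dry-run override is an assumption added here so the wrapper can be exercised without a live bdevperf:

```shell
#!/usr/bin/env bash
# Sketch of the bperf_rpc wrapper: prepend "-s /var/tmp/bperf.sock" so every
# RPC targets the bdevperf instance. RPC_CMD defaults to SPDK's rpc.py path
# from this run, but is overridable for the dry-run demo below (assumption,
# not part of the original helper).
BPERF_SOCK=/var/tmp/bperf.sock
RPC_CMD=${RPC_CMD:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py}

bperf_rpc() {
    "$RPC_CMD" -s "$BPERF_SOCK" "$@"
}

# Dry run: print the command lines instead of invoking rpc.py, replaying the
# setup sequence from the log above.
rpc_dry_run() { printf 'rpc.py %s\n' "$*"; }
RPC_CMD=rpc_dry_run
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
```

Note `--nvme-error-stat` (which makes the per-status-code counters readable via `bdev_get_iostat`) and `--ddgst` (which enables the TCP data digest that the injected crc32c corruption will trip).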
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:58.022 11:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.022 11:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:58.022 11:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.022 11:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:58.022 11:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:58.022 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:58.022 Zero copy mechanism will not be used. 00:26:58.022 Running I/O for 2 seconds... 00:26:58.022 [2024-11-19 11:38:11.742244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.022 [2024-11-19 11:38:11.742283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.022 [2024-11-19 11:38:11.742294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.022 [2024-11-19 11:38:11.746749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.022 [2024-11-19 11:38:11.746777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.022 [2024-11-19 11:38:11.746787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.022 
[2024-11-19 11:38:11.751192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.022 [2024-11-19 11:38:11.751215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.022 [2024-11-19 11:38:11.751225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.022 [2024-11-19 11:38:11.756215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.022 [2024-11-19 11:38:11.756240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.022 [2024-11-19 11:38:11.756249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.022 [2024-11-19 11:38:11.761298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.022 [2024-11-19 11:38:11.761323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.022 [2024-11-19 11:38:11.761332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.022 [2024-11-19 11:38:11.766007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.022 [2024-11-19 11:38:11.766032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.022 [2024-11-19 11:38:11.766040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.022 [2024-11-19 11:38:11.771514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.022 [2024-11-19 11:38:11.771537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.022 [2024-11-19 11:38:11.771546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.022 [2024-11-19 11:38:11.776636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.022 [2024-11-19 11:38:11.776659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.022 [2024-11-19 11:38:11.776668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.022 [2024-11-19 11:38:11.780227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.022 [2024-11-19 11:38:11.780250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.022 [2024-11-19 11:38:11.780258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.022 [2024-11-19 11:38:11.785164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.022 [2024-11-19 11:38:11.785188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.022 [2024-11-19 11:38:11.785196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.022 [2024-11-19 11:38:11.789712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.022 [2024-11-19 11:38:11.789735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.022 [2024-11-19 11:38:11.789747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.022 [2024-11-19 11:38:11.795025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.022 [2024-11-19 11:38:11.795049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.022 [2024-11-19 11:38:11.795058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.283 [2024-11-19 11:38:11.799475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.283 [2024-11-19 11:38:11.799501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.283 [2024-11-19 11:38:11.799511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.283 [2024-11-19 11:38:11.804925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.283 [2024-11-19 11:38:11.804955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:58.283 [2024-11-19 11:38:11.804965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.283 [2024-11-19 11:38:11.811075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.283 [2024-11-19 11:38:11.811099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.283 [2024-11-19 11:38:11.811119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.283 [2024-11-19 11:38:11.816876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.283 [2024-11-19 11:38:11.816899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.283 [2024-11-19 11:38:11.816909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.283 [2024-11-19 11:38:11.822071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.283 [2024-11-19 11:38:11.822094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.283 [2024-11-19 11:38:11.822103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.283 [2024-11-19 11:38:11.827799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.283 [2024-11-19 11:38:11.827822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.283 [2024-11-19 11:38:11.827831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.283 [2024-11-19 11:38:11.833870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.283 [2024-11-19 11:38:11.833893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.283 [2024-11-19 11:38:11.833902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.283 [2024-11-19 11:38:11.839903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.283 [2024-11-19 11:38:11.839933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.283 [2024-11-19 11:38:11.839942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.283 [2024-11-19 11:38:11.846745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.283 [2024-11-19 11:38:11.846769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.283 [2024-11-19 11:38:11.846777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.283 [2024-11-19 11:38:11.852977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.283 [2024-11-19 11:38:11.853000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.283 [2024-11-19 11:38:11.853008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.283 [2024-11-19 11:38:11.860391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.283 [2024-11-19 11:38:11.860414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.860423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.867729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.867752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.867761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.875643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.875667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.875676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.882993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.883017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.883026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.890234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.890257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.890267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.897581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.897604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.897613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.905065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.905088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.905097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.912534] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.912557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.912566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.920036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.920060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.920069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.927718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.927741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.927750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.935058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.935082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.935091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.942710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.942734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.942742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.950806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.950830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.950838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.958488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.958511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.958520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.965468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.965499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.965509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.971057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.971080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.971088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.975597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.975622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.975630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.980080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.980102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.980110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.984540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.984562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 
11:38:11.984570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.988973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.988996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.989005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.993476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.993499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.993507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:11.998699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:11.998722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:11.998731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:12.004302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:12.004327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:12.004336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:12.010382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:12.010406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:12.010414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:12.015056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:12.015078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:12.015086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:12.019547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:12.019569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:12.019578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:12.024133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:12.024156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.284 [2024-11-19 11:38:12.024165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.284 [2024-11-19 11:38:12.028650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.284 [2024-11-19 11:38:12.028672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.285 [2024-11-19 11:38:12.028681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.285 [2024-11-19 11:38:12.033824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.285 [2024-11-19 11:38:12.033849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.285 [2024-11-19 11:38:12.033857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.285 [2024-11-19 11:38:12.039236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.285 [2024-11-19 11:38:12.039259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.285 [2024-11-19 11:38:12.039268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.285 [2024-11-19 11:38:12.044034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17d0580) 00:26:58.285 [2024-11-19 11:38:12.044057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.285 [2024-11-19 11:38:12.044065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.285 [2024-11-19 11:38:12.048632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.285 [2024-11-19 11:38:12.048655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.285 [2024-11-19 11:38:12.048667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.285 [2024-11-19 11:38:12.053305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.285 [2024-11-19 11:38:12.053327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.285 [2024-11-19 11:38:12.053336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.285 [2024-11-19 11:38:12.058308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.285 [2024-11-19 11:38:12.058332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.285 [2024-11-19 11:38:12.058341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.064616] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.064641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.064650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.070429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.070453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.070462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.075960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.075983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.075991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.081223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.081246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.081255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.086911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.086933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.086941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.092599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.092622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.092631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.097483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.097509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.097517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.102069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.102092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.102101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.106629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.106652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.106660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.111241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.111263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.111272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.115838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.115860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.115869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.120414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.120436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.120444] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.125144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.125167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.125175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.130691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.130714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.130723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.136900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.136924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.136933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.143149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.143172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.143180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.149454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.149478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.149487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.156473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.156496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.156505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.164136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.164159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.164168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.171649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.171672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.171681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.179385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.179408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.179417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.187297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.187319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.187328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.193617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.546 [2024-11-19 11:38:12.193641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.546 [2024-11-19 11:38:12.193649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.546 [2024-11-19 11:38:12.198798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.198821] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.198834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.203393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.203416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.203424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.207915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.207937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.207946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.212512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.212534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.212543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.217082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.217105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.217114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.221683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.221706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.221714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.226239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.226260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.226268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.230800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.230822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.230830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.235414] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.235436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.235445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.239983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.240005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.240013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.244531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.244553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.244561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.249548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.249571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.249581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.254890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.254914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.254923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.261556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.261580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.261588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.267449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.267471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.267480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.272953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.272991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.273000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.278458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.278481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.278489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.283545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.283568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.283580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.288190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.288212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.288221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.292647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.292667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 
11:38:12.292675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.297144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.297167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.297175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.301598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.301620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.301628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.306697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.306719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.306727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.313557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.313580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.313589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.547 [2024-11-19 11:38:12.321079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.547 [2024-11-19 11:38:12.321103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.547 [2024-11-19 11:38:12.321112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.808 [2024-11-19 11:38:12.328200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.808 [2024-11-19 11:38:12.328225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-11-19 11:38:12.328233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.808 [2024-11-19 11:38:12.336160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.808 [2024-11-19 11:38:12.336188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-11-19 11:38:12.336196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.808 [2024-11-19 11:38:12.342235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.808 [2024-11-19 11:38:12.342260] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-11-19 11:38:12.342269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.808 [2024-11-19 11:38:12.349791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.808 [2024-11-19 11:38:12.349815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-11-19 11:38:12.349824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.808 [2024-11-19 11:38:12.357362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.808 [2024-11-19 11:38:12.357384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-11-19 11:38:12.357393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.808 [2024-11-19 11:38:12.365043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.808 [2024-11-19 11:38:12.365067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-11-19 11:38:12.365076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.808 [2024-11-19 11:38:12.373029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.808 [2024-11-19 
11:38:12.373053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-11-19 11:38:12.373062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.808 [2024-11-19 11:38:12.380650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.808 [2024-11-19 11:38:12.380673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.808 [2024-11-19 11:38:12.380682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.387976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.388000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.388009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.395549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.395572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.395581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.403041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.403064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.403073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.410923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.410946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.410961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.415396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.415418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.415427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.422582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.422606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.422614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.430243] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.430267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.430275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.438043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.438066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.438075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.445474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.445497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.445506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.453677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.453699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.453709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.461546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.461570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.461583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.469188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.469211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.469220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.476864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.476887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.476896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.484750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.484774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.484782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.492472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.492495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.492503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.500898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.500920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.500929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.508794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.508818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.508827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.516928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.516957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.516966] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.524592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.524615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.524624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.532427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.532453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.532462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.539718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.539741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.539749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.547432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.547453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.547462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.553066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.553088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.553096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.557797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.557819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.557827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.562577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.562598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.562607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.567601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.567623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.567631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.809 [2024-11-19 11:38:12.572902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.809 [2024-11-19 11:38:12.572925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.809 [2024-11-19 11:38:12.572933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.810 [2024-11-19 11:38:12.578558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.810 [2024-11-19 11:38:12.578581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.810 [2024-11-19 11:38:12.578589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.810 [2024-11-19 11:38:12.584959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:58.810 [2024-11-19 11:38:12.584998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.810 [2024-11-19 11:38:12.585007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.590929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.590958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.590967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.596649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.596672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.596680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.602304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.602326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.602334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.608257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.608281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.608290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.614453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 
00:26:59.071 [2024-11-19 11:38:12.614476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.614485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.619572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.619594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.619602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.624327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.624349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.624357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.629134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.629156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.629169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.634078] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.634100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.634109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.639890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.639913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.639922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.647095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.647119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.647128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.652724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.652747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.652756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.658228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.658251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.658259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.663618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.663642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.663650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.670177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.670202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.670211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.677582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.677607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.677616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.684154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.684178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.071 [2024-11-19 11:38:12.684187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.071 [2024-11-19 11:38:12.689918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.071 [2024-11-19 11:38:12.689942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.689957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.697309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.697335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.697344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.705340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.705364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.705373] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.712219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.712242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.712251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.719136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.719159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.719168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.726122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.726145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.726154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.732264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.732287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.732296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.072 5098.00 IOPS, 637.25 MiB/s [2024-11-19T10:38:12.853Z] [2024-11-19 11:38:12.739976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.739999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.740014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.744816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.744839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.744849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.749332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.749355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.749363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.753974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.754002] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.754010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.758771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.758794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.758803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.763421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.763445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.763453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.768117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.768139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.768147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.772728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.772751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.772759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.777335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.777357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.777365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.781987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.782013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.782022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.786618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.786641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.786650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.791183] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.791206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.791214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.795675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.795698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.795706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.800230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.800253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.800261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.804772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.804793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.804801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.809336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.809358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.809366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.814569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.814591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.814600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.819494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.819517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.819525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.825007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.825029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.825038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.072 [2024-11-19 11:38:12.831029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.072 [2024-11-19 11:38:12.831052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.072 [2024-11-19 11:38:12.831061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.073 [2024-11-19 11:38:12.836999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.073 [2024-11-19 11:38:12.837023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.073 [2024-11-19 11:38:12.837031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.073 [2024-11-19 11:38:12.841000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.073 [2024-11-19 11:38:12.841022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.073 [2024-11-19 11:38:12.841030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.073 [2024-11-19 11:38:12.846191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.073 [2024-11-19 11:38:12.846215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.073 [2024-11-19 11:38:12.846223] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.852211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.852235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.852244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.857728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.857752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.857761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.863150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.863172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.863181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.868165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.868188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.868201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.873109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.873132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.873141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.878397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.878419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.878427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.883777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.883800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.883808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.889178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.889200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.889208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.894917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.894940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.894955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.900475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.900498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.900506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.906173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.906196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.906205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.912199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.912222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.912231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.917830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.917851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.917860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.923453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.923476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.923484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.929148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.929171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.334 [2024-11-19 11:38:12.929179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.334 [2024-11-19 11:38:12.934830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17d0580) 00:26:59.334 [2024-11-19 11:38:12.934853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:12.934861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:12.942445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:12.942469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:12.942477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:12.948417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:12.948439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:12.948447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:12.953806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:12.953828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:12.953837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:12.960595] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:12.960619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:12.960628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:12.969062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:12.969085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:12.969098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:12.975737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:12.975763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:12.975772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:12.982035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:12.982058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:12.982067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:12.988409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:12.988432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:12.988440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:12.994983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:12.995005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:12.995013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.000419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.000441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.000450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.006179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.006202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.006210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.011600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.011622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.011630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.017007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.017029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.017037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.022119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.022146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.022154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.027373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.027396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.027404] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.032727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.032750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.032758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.038041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.038064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.038073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.043346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.043368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.043376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.048772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.048795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.048803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.054196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.054218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.054226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.059775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.059797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.059805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.065279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.065303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.065311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.070943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.070971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.070980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.076416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.076438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.076446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.081865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.081886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.081894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.087514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 11:38:13.087536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.335 [2024-11-19 11:38:13.087545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.335 [2024-11-19 11:38:13.092916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.335 [2024-11-19 
11:38:13.092939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.336 [2024-11-19 11:38:13.092956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.336 [2024-11-19 11:38:13.098383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.336 [2024-11-19 11:38:13.098405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.336 [2024-11-19 11:38:13.098414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.336 [2024-11-19 11:38:13.103836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.336 [2024-11-19 11:38:13.103858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.336 [2024-11-19 11:38:13.103866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.336 [2024-11-19 11:38:13.109249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.336 [2024-11-19 11:38:13.109270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.336 [2024-11-19 11:38:13.109279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.597 [2024-11-19 11:38:13.112274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x17d0580) 00:26:59.597 [2024-11-19 11:38:13.112297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.597 [2024-11-19 11:38:13.112310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.597 [2024-11-19 11:38:13.117823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.597 [2024-11-19 11:38:13.117855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.597 [2024-11-19 11:38:13.117863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.597 [2024-11-19 11:38:13.124163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.597 [2024-11-19 11:38:13.124183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.597 [2024-11-19 11:38:13.124191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.597 [2024-11-19 11:38:13.128786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.597 [2024-11-19 11:38:13.128808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.597 [2024-11-19 11:38:13.128816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.597 [2024-11-19 11:38:13.134320] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.597 [2024-11-19 11:38:13.134341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.597 [2024-11-19 11:38:13.134350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.597 [2024-11-19 11:38:13.139877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.597 [2024-11-19 11:38:13.139898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.597 [2024-11-19 11:38:13.139906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.597 [2024-11-19 11:38:13.144870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.597 [2024-11-19 11:38:13.144893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.597 [2024-11-19 11:38:13.144901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.597 [2024-11-19 11:38:13.150292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.597 [2024-11-19 11:38:13.150313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.597 [2024-11-19 11:38:13.150321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:59.597 [2024-11-19 11:38:13.155731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.597 [2024-11-19 11:38:13.155752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.597 [2024-11-19 11:38:13.155760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.597 [2024-11-19 11:38:13.160912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.597 [2024-11-19 11:38:13.160938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.597 [2024-11-19 11:38:13.160946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.597 [2024-11-19 11:38:13.166912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.597 [2024-11-19 11:38:13.166934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.597 [2024-11-19 11:38:13.166942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.597 [2024-11-19 11:38:13.172589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.597 [2024-11-19 11:38:13.172611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.597 [2024-11-19 11:38:13.172620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.597 [2024-11-19 11:38:13.178088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.597 [2024-11-19 11:38:13.178110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.178119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.183721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.183742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.183751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.189246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.189268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.189276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.194754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.194776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.194785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.200361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.200383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.200391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.205886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.205908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.205917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.211337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.211359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.211367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.217213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.217236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.217244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.222547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.222569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.222577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.228545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.228567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.228576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.234015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.234036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.234044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.239400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.239422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.239430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.244919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.244941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.244955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.250425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.250447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.250455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.256154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.256176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.256188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.261462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.261484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.261493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.266760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.266783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.266791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.272114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.272136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.272145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.277615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.277638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.277646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.283149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.283171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.283179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.288749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.288772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.288780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.294219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.294242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.294251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.299901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.299924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.299932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.305646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.305668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.305676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.311888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.311912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.311920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.317602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.317624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.317633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.323352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.323375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.598 [2024-11-19 11:38:13.323383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.598 [2024-11-19 11:38:13.329366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.598 [2024-11-19 11:38:13.329389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.599 [2024-11-19 11:38:13.329396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.599 [2024-11-19 11:38:13.334870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.599 [2024-11-19 11:38:13.334892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.599 [2024-11-19 11:38:13.334900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.599 [2024-11-19 11:38:13.340343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.599 [2024-11-19 11:38:13.340365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.599 [2024-11-19 11:38:13.340373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.599 [2024-11-19 11:38:13.346020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.599 [2024-11-19 11:38:13.346042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.599 [2024-11-19 11:38:13.346050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.599 [2024-11-19 11:38:13.351595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.599 [2024-11-19 11:38:13.351617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.599 [2024-11-19 11:38:13.351629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.599 [2024-11-19 11:38:13.357284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.599 [2024-11-19 11:38:13.357307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.599 [2024-11-19 11:38:13.357315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.599 [2024-11-19 11:38:13.362775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.599 [2024-11-19 11:38:13.362799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.599 [2024-11-19 11:38:13.362807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.599 [2024-11-19 11:38:13.368425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.599 [2024-11-19 11:38:13.368447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.599 [2024-11-19 11:38:13.368456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.860 [2024-11-19 11:38:13.374209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.860 [2024-11-19 11:38:13.374235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.860 [2024-11-19 11:38:13.374243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.860 [2024-11-19 11:38:13.380051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.860 [2024-11-19 11:38:13.380075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.860 [2024-11-19 11:38:13.380084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.860 [2024-11-19 11:38:13.385849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.860 [2024-11-19 11:38:13.385871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.860 [2024-11-19 11:38:13.385880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.860 [2024-11-19 11:38:13.391319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.860 [2024-11-19 11:38:13.391343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.860 [2024-11-19 11:38:13.391351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.860 [2024-11-19 11:38:13.397117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.860 [2024-11-19 11:38:13.397139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.860 [2024-11-19 11:38:13.397147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.860 [2024-11-19 11:38:13.402875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.860 [2024-11-19 11:38:13.402902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.860 [2024-11-19 11:38:13.402910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.860 [2024-11-19 11:38:13.408405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.860 [2024-11-19 11:38:13.408428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.860 [2024-11-19 11:38:13.408436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.860 [2024-11-19 11:38:13.413810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.860 [2024-11-19 11:38:13.413833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.860 [2024-11-19 11:38:13.413841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.860 [2024-11-19 11:38:13.419248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.860 [2024-11-19 11:38:13.419270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.860 [2024-11-19 11:38:13.419278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.860 [2024-11-19 11:38:13.424577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.860 [2024-11-19 11:38:13.424599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.860 [2024-11-19 11:38:13.424607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.860 [2024-11-19 11:38:13.429823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.860 [2024-11-19 11:38:13.429846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.860 [2024-11-19 11:38:13.429853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.860 [2024-11-19 11:38:13.435082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.860 [2024-11-19 11:38:13.435104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.435112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.440391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.440413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.440423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.445704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.445726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.445734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.449229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.449249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.449257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.453988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.454010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.454018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.459341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.459363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.459371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.464621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.464643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.464651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.469942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.469970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.469978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.475302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.475324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.475332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.480605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.480626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.480634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.486039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.486060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.486068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.491359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.491380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.491392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.496616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.496638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.496646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.501939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.501967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.501975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.507221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.507242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.507251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.512095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.512117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.512125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.517374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.517396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.517404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.522598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.522619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.522627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.527980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.528001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.528009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.533277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.533299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.533307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.538591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.538617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.538627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.543674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.543696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.543704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.548880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.548902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.548910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.554069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.554091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.554099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.559288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.559310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.559318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.564706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.564729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.564737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.570040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.570062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.861 [2024-11-19 11:38:13.570070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.861 [2024-11-19 11:38:13.575375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.861 [2024-11-19 11:38:13.575398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.862 [2024-11-19 11:38:13.575406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.862 [2024-11-19 11:38:13.580582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.862 [2024-11-19 11:38:13.580603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.862 [2024-11-19 11:38:13.580611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.862 [2024-11-19 11:38:13.585830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.862 [2024-11-19 11:38:13.585852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.862 [2024-11-19 11:38:13.585860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.862 [2024-11-19 11:38:13.591097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.862 [2024-11-19 11:38:13.591119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.862 [2024-11-19 11:38:13.591128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.862 [2024-11-19 11:38:13.596329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.862 [2024-11-19 11:38:13.596351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.862 [2024-11-19 11:38:13.596358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.862 [2024-11-19 11:38:13.601535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.862 [2024-11-19 11:38:13.601556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.862 [2024-11-19 11:38:13.601564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:59.862 [2024-11-19 11:38:13.606756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.862 [2024-11-19 11:38:13.606777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.862 [2024-11-19 11:38:13.606785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:59.862 [2024-11-19 11:38:13.612004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580)
00:26:59.862 [2024-11-19 11:38:13.612025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.862 [2024-11-19 11:38:13.612034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:59.862 [2024-11-19 11:38:13.617302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x17d0580) 00:26:59.862 [2024-11-19 11:38:13.617324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.862 [2024-11-19 11:38:13.617331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.862 [2024-11-19 11:38:13.622616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.862 [2024-11-19 11:38:13.622637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.862 [2024-11-19 11:38:13.622645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.862 [2024-11-19 11:38:13.628118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.862 [2024-11-19 11:38:13.628145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.862 [2024-11-19 11:38:13.628154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.862 [2024-11-19 11:38:13.633331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:26:59.862 [2024-11-19 11:38:13.633353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.862 [2024-11-19 11:38:13.633361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.122 [2024-11-19 11:38:13.638614] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.122 [2024-11-19 11:38:13.638638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.122 [2024-11-19 11:38:13.638646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.122 [2024-11-19 11:38:13.643852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.643873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.643882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.649099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.649121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.649129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.654327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.654349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.654357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.659654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.659675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.659683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.664473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.664496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.664504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.669790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.669812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.669820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.675123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.675145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.675153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.680373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.680395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.680403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.683785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.683806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.683814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.687894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.687916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.687924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.693088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.693109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 
11:38:13.693117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.698247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.698269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.698277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.703519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.703541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.703549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.708766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.708788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.708796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.714069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.714091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25440 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.714102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.719393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.719417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.719425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.724057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.724079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.724088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.729368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.729391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.729399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.123 [2024-11-19 11:38:13.734664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d0580) 00:27:00.123 [2024-11-19 11:38:13.734686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.123 [2024-11-19 11:38:13.734695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.123 5428.50 IOPS, 678.56 MiB/s 00:27:00.123 Latency(us) 00:27:00.123 [2024-11-19T10:38:13.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.123 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:00.123 nvme0n1 : 2.00 5428.24 678.53 0.00 0.00 2944.91 658.92 13905.03 00:27:00.123 [2024-11-19T10:38:13.904Z] =================================================================================================================== 00:27:00.123 [2024-11-19T10:38:13.904Z] Total : 5428.24 678.53 0.00 0.00 2944.91 658.92 13905.03 00:27:00.123 { 00:27:00.123 "results": [ 00:27:00.123 { 00:27:00.123 "job": "nvme0n1", 00:27:00.123 "core_mask": "0x2", 00:27:00.123 "workload": "randread", 00:27:00.123 "status": "finished", 00:27:00.123 "queue_depth": 16, 00:27:00.123 "io_size": 131072, 00:27:00.123 "runtime": 2.003045, 00:27:00.123 "iops": 5428.235511433842, 00:27:00.123 "mibps": 678.5294389292302, 00:27:00.123 "io_failed": 0, 00:27:00.123 "io_timeout": 0, 00:27:00.123 "avg_latency_us": 2944.905400933305, 00:27:00.123 "min_latency_us": 658.9217391304347, 00:27:00.123 "max_latency_us": 13905.029565217392 00:27:00.123 } 00:27:00.123 ], 00:27:00.123 "core_count": 1 00:27:00.123 } 00:27:00.123 11:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:00.123 11:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:00.123 11:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:00.123 | .driver_specific 00:27:00.123 | .nvme_error 
00:27:00.123 | .status_code 00:27:00.123 | .command_transient_transport_error' 00:27:00.123 11:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:00.383 11:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 351 > 0 )) 00:27:00.383 11:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2409894 00:27:00.383 11:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2409894 ']' 00:27:00.383 11:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2409894 00:27:00.384 11:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:00.384 11:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.384 11:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2409894 00:27:00.384 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:00.384 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:00.384 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2409894' 00:27:00.384 killing process with pid 2409894 00:27:00.384 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2409894 00:27:00.384 Received shutdown signal, test time was about 2.000000 seconds 00:27:00.384 00:27:00.384 Latency(us) 00:27:00.384 [2024-11-19T10:38:14.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.384 
[2024-11-19T10:38:14.165Z] =================================================================================================================== 00:27:00.384 [2024-11-19T10:38:14.165Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:00.384 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2409894 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2410406 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2410406 /var/tmp/bperf.sock 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2410406 ']' 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:27:00.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.644 [2024-11-19 11:38:14.214426] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:27:00.644 [2024-11-19 11:38:14.214475] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410406 ] 00:27:00.644 [2024-11-19 11:38:14.288280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.644 [2024-11-19 11:38:14.331101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:00.644 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:00.904 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:00.904 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.904 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:27:00.904 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.904 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:00.904 11:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:01.474 nvme0n1 00:27:01.474 11:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:01.474 11:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.474 11:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.474 11:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.474 11:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:01.474 11:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:01.474 Running I/O for 2 seconds... 
00:27:01.474 [2024-11-19 11:38:15.156828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:01.474 [2024-11-19 11:38:15.157001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.474 [2024-11-19 11:38:15.157027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:01.474 [2024-11-19 11:38:15.166538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:01.474 [2024-11-19 11:38:15.166691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.474 [2024-11-19 11:38:15.166711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:01.474 [2024-11-19 11:38:15.176270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:01.474 [2024-11-19 11:38:15.176420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.474 [2024-11-19 11:38:15.176440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:01.474 [2024-11-19 11:38:15.186009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:01.474 [2024-11-19 11:38:15.186161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.474 [2024-11-19 11:38:15.186184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:01.474 [2024-11-19 11:38:15.195688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:01.474 [2024-11-19 11:38:15.195837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.474 [2024-11-19 11:38:15.195855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:01.474 [2024-11-19 11:38:15.205580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:01.474 [2024-11-19 11:38:15.205728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.474 [2024-11-19 11:38:15.205746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:01.474 [2024-11-19 11:38:15.215288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:01.474 [2024-11-19 11:38:15.215441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.474 [2024-11-19 11:38:15.215459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:01.474 [2024-11-19 11:38:15.224936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:01.474 [2024-11-19 11:38:15.225089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.474 [2024-11-19 11:38:15.225107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:01.474 [2024-11-19 11:38:15.234614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720
00:27:01.474 [2024-11-19 11:38:15.234762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:01.474 [2024-11-19 11:38:15.234780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
[... same three-record pattern (tcp.c:2233:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720; nvme_qpair.c:243 WRITE *NOTICE*; nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each injected data-digest failure, timestamps 11:38:15.244 through 11:38:16.014, qid:1, cid cycling 106/107/108, len:1, varying lba ...]
00:27:02.260 [2024-11-19 11:38:16.024210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720
00:27:02.260 [2024-11-19 11:38:16.024356] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.260 [2024-11-19 11:38:16.024374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.260 [2024-11-19 11:38:16.033921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.260 [2024-11-19 11:38:16.034073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.260 [2024-11-19 11:38:16.034091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.043812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.043963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.043981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.053471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.053617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.053634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.063116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 
00:27:02.521 [2024-11-19 11:38:16.063263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.063280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.072775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.072920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.072939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.082364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.082511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.082529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.092045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.092194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.092212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.101660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.101809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.101826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.111317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.111465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.111485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.121032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.121182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.121200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.130688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.130835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.130852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.140315] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.140460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.140478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 26120.00 IOPS, 102.03 MiB/s [2024-11-19T10:38:16.302Z] [2024-11-19 11:38:16.149966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.150115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.150134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.159587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.159735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.159752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.169256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.169422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.169442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.179229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.179378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.179395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.188899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.189055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.189073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.198554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.521 [2024-11-19 11:38:16.198708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.521 [2024-11-19 11:38:16.198725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.521 [2024-11-19 11:38:16.208243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.522 [2024-11-19 11:38:16.208389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.522 [2024-11-19 11:38:16.208407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.522 [2024-11-19 11:38:16.218006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.522 [2024-11-19 11:38:16.218155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.522 [2024-11-19 11:38:16.218173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.522 [2024-11-19 11:38:16.227679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.522 [2024-11-19 11:38:16.227826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.522 [2024-11-19 11:38:16.227844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.522 [2024-11-19 11:38:16.237306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.522 [2024-11-19 11:38:16.237452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.522 [2024-11-19 11:38:16.237470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.522 [2024-11-19 11:38:16.247028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.522 [2024-11-19 11:38:16.247175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:02.522 [2024-11-19 11:38:16.247193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.522 [2024-11-19 11:38:16.256669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.522 [2024-11-19 11:38:16.256816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.522 [2024-11-19 11:38:16.256834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.522 [2024-11-19 11:38:16.266285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.522 [2024-11-19 11:38:16.266431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.522 [2024-11-19 11:38:16.266449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.522 [2024-11-19 11:38:16.275965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.522 [2024-11-19 11:38:16.276112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.522 [2024-11-19 11:38:16.276129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.522 [2024-11-19 11:38:16.285596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.522 [2024-11-19 11:38:16.285744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:24003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.522 [2024-11-19 11:38:16.285761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.522 [2024-11-19 11:38:16.295350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.522 [2024-11-19 11:38:16.295497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.522 [2024-11-19 11:38:16.295514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.782 [2024-11-19 11:38:16.305162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.782 [2024-11-19 11:38:16.305308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.782 [2024-11-19 11:38:16.305326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.782 [2024-11-19 11:38:16.314820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.782 [2024-11-19 11:38:16.314969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.782 [2024-11-19 11:38:16.314987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.782 [2024-11-19 11:38:16.324555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.782 [2024-11-19 11:38:16.324701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.782 [2024-11-19 11:38:16.324719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.782 [2024-11-19 11:38:16.334196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.782 [2024-11-19 11:38:16.334344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.782 [2024-11-19 11:38:16.334361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.782 [2024-11-19 11:38:16.343780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.782 [2024-11-19 11:38:16.343926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.782 [2024-11-19 11:38:16.343944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.782 [2024-11-19 11:38:16.353471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.782 [2024-11-19 11:38:16.353617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.782 [2024-11-19 11:38:16.353634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.782 [2024-11-19 11:38:16.363074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 
00:27:02.782 [2024-11-19 11:38:16.363221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.782 [2024-11-19 11:38:16.363242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.782 [2024-11-19 11:38:16.372699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.782 [2024-11-19 11:38:16.372847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.782 [2024-11-19 11:38:16.372864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.782 [2024-11-19 11:38:16.382367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.782 [2024-11-19 11:38:16.382516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.782 [2024-11-19 11:38:16.382534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.782 [2024-11-19 11:38:16.392009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.782 [2024-11-19 11:38:16.392156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.782 [2024-11-19 11:38:16.392173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.782 [2024-11-19 11:38:16.401670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.782 [2024-11-19 11:38:16.401817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.782 [2024-11-19 11:38:16.401834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.782 [2024-11-19 11:38:16.411383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.782 [2024-11-19 11:38:16.411530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.782 [2024-11-19 11:38:16.411548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.783 [2024-11-19 11:38:16.421063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.783 [2024-11-19 11:38:16.421211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.783 [2024-11-19 11:38:16.421229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.783 [2024-11-19 11:38:16.431035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.783 [2024-11-19 11:38:16.431185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.783 [2024-11-19 11:38:16.431204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.783 [2024-11-19 11:38:16.440816] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.783 [2024-11-19 11:38:16.440961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.783 [2024-11-19 11:38:16.440995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.783 [2024-11-19 11:38:16.450580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.783 [2024-11-19 11:38:16.450730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.783 [2024-11-19 11:38:16.450748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.783 [2024-11-19 11:38:16.460260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.783 [2024-11-19 11:38:16.460406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.783 [2024-11-19 11:38:16.460424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.783 [2024-11-19 11:38:16.469857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.783 [2024-11-19 11:38:16.470012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.783 [2024-11-19 11:38:16.470029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 
m:0 dnr:0 00:27:02.783 [2024-11-19 11:38:16.479551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.783 [2024-11-19 11:38:16.479696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.783 [2024-11-19 11:38:16.479714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.783 [2024-11-19 11:38:16.489203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.783 [2024-11-19 11:38:16.489351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.783 [2024-11-19 11:38:16.489369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.783 [2024-11-19 11:38:16.498840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.783 [2024-11-19 11:38:16.498994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.783 [2024-11-19 11:38:16.499012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:02.783 [2024-11-19 11:38:16.508493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720 00:27:02.783 [2024-11-19 11:38:16.508640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.783 [2024-11-19 11:38:16.508658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:02.783 [2024-11-19 11:38:16.518225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720
00:27:02.783 [2024-11-19 11:38:16.518373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.783 [2024-11-19 11:38:16.518391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
[... the same injected-error triple (tcp.c:2233 data_crc32_calc_done *ERROR*: Data digest error, nvme_qpair.c:243 WRITE command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 10 ms on tqpair=(0x1951640), cid cycling 106-108 with varying lba, from 11:38:16.527 through 11:38:17.139 ...]
00:27:03.568 [2024-11-19 11:38:17.148530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951640) with pdu=0x2000166fe720
00:27:03.568 [2024-11-19 11:38:17.149354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.568 [2024-11-19 11:38:17.149374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0
m:0 dnr:0
00:27:03.568 26244.50 IOPS, 102.52 MiB/s
00:27:03.568 Latency(us)
00:27:03.568 [2024-11-19T10:38:17.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:03.568 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:03.568 nvme0n1 : 2.01 26244.60 102.52 0.00 0.00 4868.73 3618.73 10143.83
00:27:03.568 [2024-11-19T10:38:17.349Z] ===================================================================================================================
00:27:03.569 [2024-11-19T10:38:17.350Z] Total : 26244.60 102.52 0.00 0.00 4868.73 3618.73 10143.83
00:27:03.569 {
00:27:03.569   "results": [
00:27:03.569     {
00:27:03.569       "job": "nvme0n1",
00:27:03.569       "core_mask": "0x2",
00:27:03.569       "workload": "randwrite",
00:27:03.569       "status": "finished",
00:27:03.569       "queue_depth": 128,
00:27:03.569       "io_size": 4096,
00:27:03.569       "runtime": 2.006051,
00:27:03.569       "iops": 26244.596971861632,
00:27:03.569       "mibps": 102.5179569213345,
00:27:03.569       "io_failed": 0,
00:27:03.569       "io_timeout": 0,
00:27:03.569       "avg_latency_us": 4868.7328617958165,
00:27:03.569       "min_latency_us": 3618.7269565217393,
00:27:03.569       "max_latency_us": 10143.83304347826
00:27:03.569     }
00:27:03.569   ],
00:27:03.569   "core_count": 1
00:27:03.569 }
00:27:03.569 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:03.569 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:03.569 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:03.569 | .driver_specific
00:27:03.569 | .nvme_error
00:27:03.569 | .status_code
00:27:03.569 | .command_transient_transport_error'
00:27:03.569 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
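The `get_transient_errcount` trace above pulls the transient-transport-error counter out of `bdev_get_iostat` output with `jq`. A minimal standalone sketch of that extraction, run against a hypothetical iostat-style response (the field path matches the filter in host/digest.sh; the file name and sample count are made up for illustration):

```shell
# Hypothetical bdev_get_iostat-style response; in the test this JSON comes from
#   scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
cat > /tmp/iostat_sample.json <<'EOF'
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 206
          }
        }
      }
    }
  ]
}
EOF

# Same filter chain as host/digest.sh@28: walk into the nvme_error status
# counters and print the transient transport error count as raw text
jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error' /tmp/iostat_sample.json
# prints 206
```

The test then asserts the count is non-zero (`(( 206 > 0 ))` in the trace below), confirming the injected CRC errors were surfaced as transient transport errors.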
00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 206 > 0 )) 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2410406 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2410406 ']' 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2410406 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2410406 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2410406' 00:27:03.830 killing process with pid 2410406 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2410406 00:27:03.830 Received shutdown signal, test time was about 2.000000 seconds 00:27:03.830 00:27:03.830 Latency(us) 00:27:03.830 [2024-11-19T10:38:17.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.830 [2024-11-19T10:38:17.611Z] =================================================================================================================== 00:27:03.830 [2024-11-19T10:38:17.611Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@978 -- # wait 2410406 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2411055 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2411055 /var/tmp/bperf.sock 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2411055 ']' 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:03.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:03.830 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.090 [2024-11-19 11:38:17.632248] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:27:04.090 [2024-11-19 11:38:17.632295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411055 ] 00:27:04.090 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:04.090 Zero copy mechanism will not be used. 00:27:04.090 [2024-11-19 11:38:17.705882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.090 [2024-11-19 11:38:17.746305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.090 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.090 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:04.090 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:04.090 11:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:04.351 11:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:04.351 11:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.351 11:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
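`waitforlisten 2411055 /var/tmp/bperf.sock` above blocks until the freshly started bdevperf (`-r /var/tmp/bperf.sock -z`) accepts connections on its RPC socket, bounded by `max_retries=100` as in the trace. A hedged sketch of that polling loop, with a background thread standing in for the bdevperf process:

```python
import os
import socket
import tempfile
import threading
import time

# Stand-in RPC socket path; the real harness uses /var/tmp/bperf.sock.
sock_path = os.path.join(tempfile.mkdtemp(), "bperf.sock")

def fake_server():
    time.sleep(0.2)  # simulated bdevperf startup delay
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    srv.listen(1)
    conn, _ = srv.accept()
    conn.close()
    srv.close()

threading.Thread(target=fake_server, daemon=True).start()

max_retries = 100  # same retry budget as local max_retries=100 above
for attempt in range(1, max_retries + 1):
    try:
        cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        cli.connect(sock_path)  # succeeds once the server is listening
        cli.close()
        break
    except OSError:
        time.sleep(0.05)        # socket not there yet; retry
else:
    raise TimeoutError("RPC socket never appeared")
print(f"listening after {attempt} attempt(s)")
```

Probing with a real `connect()` rather than just checking the path exists avoids the race where the socket file is created before the listener is ready.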
common/autotest_common.sh@10 -- # set +x 00:27:04.351 11:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.351 11:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.351 11:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.611 nvme0n1 00:27:04.611 11:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:04.611 11:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.611 11:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.611 11:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.611 11:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:04.611 11:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:04.872 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:04.872 Zero copy mechanism will not be used. 00:27:04.872 Running I/O for 2 seconds... 
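The three RPCs traced here set up the failure mode for the rest of the run: enable per-error-code NVMe statistics with unlimited bdev retries, tell the accel layer to corrupt every 32nd CRC-32C result, and attach the TCP controller with data digest (`--ddgst`) enabled so the corrupted CRCs surface as the "Data digest error" lines that follow. Since `scripts/rpc.py` is a thin JSON-RPC 2.0 client over the UNIX socket, the calls reduce to request objects like these; the parameter names below are assumptions matching the CLI flags, not verified against the SPDK RPC schema:

```python
import itertools
import json

_ids = itertools.count(1)

def rpc_request(method, **params):
    # Shape of a JSON-RPC 2.0 request as rpc.py would frame it.
    return {"jsonrpc": "2.0", "id": next(_ids), "method": method,
            "params": params}

requests = [
    # bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_request("bdev_nvme_set_options",
                nvme_error_stat=True, bdev_retry_count=-1),
    # accel_error_inject_error -o crc32c -t corrupt -i 32
    # (param names assumed from the flags: op, type, interval)
    rpc_request("accel_error_inject_error",
                op="crc32c", type="corrupt", interval=32),
    # bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    #   -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_request("bdev_nvme_attach_controller",
                name="nvme0", trtype="tcp", traddr="10.0.0.2",
                trsvcid="4420", adrfam="ipv4",
                subnqn="nqn.2016-06.io.spdk:cnode1", ddgst=True),
]
for req in requests:
    print(json.dumps(req))
```

With `bdev_retry_count=-1` the bdev layer keeps retrying each digest failure, which is why the trace shows a steady stream of COMMAND TRANSIENT TRANSPORT ERROR completions instead of failed I/O (`io_failed: 0` earlier).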
00:27:04.872 [2024-11-19 11:38:18.424382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.424573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.424602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.431524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.431672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.431696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.438128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.438300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.438324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.444120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.444252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.444272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.450540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.450698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.450719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.456818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.456963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.456983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.461899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.461990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.462010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.467196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.467304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.467323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.472549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.472642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.472661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.478065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.478175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.478195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.483649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.483749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.483768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.489089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.489207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:04.872 [2024-11-19 11:38:18.489225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.494497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.494555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.494580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.499349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.499436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.499455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.504022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.504122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.504140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.509444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.509608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.872 [2024-11-19 11:38:18.509627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.872 [2024-11-19 11:38:18.515897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.872 [2024-11-19 11:38:18.516065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.516084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.521415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.521524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.521544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.527105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.527188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.527207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.532504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.532807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.532828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.537635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.537897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.537918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.542723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.543038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.543059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.548509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.548735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.548756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.554073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 
00:27:04.873 [2024-11-19 11:38:18.554407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.554427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.560070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.560372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.560394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.566017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.566335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.566356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.571597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.571863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.571883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.577482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.577751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.577772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.584298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.584557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.584577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.590030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.590288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.590309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.595162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.595408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.595429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.601009] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.601326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.601346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.607665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.607814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.607833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.612617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.612884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.612905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.617212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.617460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.617481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:04.873 [2024-11-19 11:38:18.621989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.622237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.622257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.627796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.628137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.628159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.633624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.633907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.633928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.638381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.638646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.638675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.643122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.643382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.643404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 11:38:18.647812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:04.873 [2024-11-19 11:38:18.648100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 11:38:18.648122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.135 [2024-11-19 11:38:18.652538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.135 [2024-11-19 11:38:18.652803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.135 [2024-11-19 11:38:18.652824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.135 [2024-11-19 11:38:18.657351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.135 [2024-11-19 11:38:18.657619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.135 [2024-11-19 11:38:18.657639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.135 [2024-11-19 11:38:18.661995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.135 [2024-11-19 11:38:18.662289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.135 [2024-11-19 11:38:18.662310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.135 [2024-11-19 11:38:18.667048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.135 [2024-11-19 11:38:18.667304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.135 [2024-11-19 11:38:18.667325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.135 [2024-11-19 11:38:18.672059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.135 [2024-11-19 11:38:18.672370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.135 [2024-11-19 11:38:18.672390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.135 [2024-11-19 11:38:18.678232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.135 [2024-11-19 11:38:18.678608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:05.135 [2024-11-19 11:38:18.678628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.135 [2024-11-19 11:38:18.683526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.135 [2024-11-19 11:38:18.683783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.135 [2024-11-19 11:38:18.683803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-entry cycle -- tcp.c:2233:data_crc32_calc_done data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8, nvme_io_qpair_print_command WRITE print, spdk_nvme_print_completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) -- repeats for several dozen further WRITE commands (qid:1 cid:0 nsid:1, len:32, varying lba) between 11:38:18.689 and 11:38:19.074; repeated entries elided ...]
00:27:05.400 [2024-11-19 11:38:19.074544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.400 [2024-11-19 11:38:19.074802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.400 [2024-11-19 11:38:19.074822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.400 [2024-11-19 11:38:19.078760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.400 [2024-11-19 11:38:19.079020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.400 [2024-11-19 11:38:19.079040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.400 [2024-11-19 11:38:19.082957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.400 [2024-11-19 11:38:19.083223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.400 [2024-11-19 11:38:19.083243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.400 [2024-11-19 11:38:19.087145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.400 [2024-11-19 11:38:19.087392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.400 [2024-11-19 11:38:19.087411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.400 [2024-11-19 11:38:19.091378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.400 [2024-11-19 11:38:19.091646] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.400 [2024-11-19 11:38:19.091666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.400 [2024-11-19 11:38:19.095548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.400 [2024-11-19 11:38:19.095792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.400 [2024-11-19 11:38:19.095812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.099744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.100009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.100028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.103993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.104254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.104274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.108203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with 
pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.108463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.108483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.112421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.112691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.112710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.116615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.116864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.116884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.120863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.121136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.121156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.125060] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.125311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.125331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.129284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.129546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.129565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.133482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.133735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.133755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.137715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.137986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.138006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 
11:38:19.141884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.142138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.142158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.146088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.146342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.146362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.150285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.150539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.150563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.154539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.154797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.154816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.158682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.158953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.158973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.163049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.163294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.163314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.168291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.168617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.168638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.401 [2024-11-19 11:38:19.173931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.401 [2024-11-19 11:38:19.174229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.401 [2024-11-19 11:38:19.174249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.179623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.179933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.179960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.185259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.185512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.185532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.191335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.191622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.191642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.197323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.197615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.197637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.203101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.203366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.203386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.209159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.209459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.209479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.215277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.215473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.215491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.219806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.220039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:05.662 [2024-11-19 11:38:19.220060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.224502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.224727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.224747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.229568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.229791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.229812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.234764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.235005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.235025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.239699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.239920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.239940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.244940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.245205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.245225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.249471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.249674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.249694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.253454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.253655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.253675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.257532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.257721] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.257740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.261503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.261706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.261724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.265472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.265680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.265700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.269621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.269843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.269862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.273735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.273930] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.273953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.277731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.277929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.277957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.281680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.281878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.281906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.285561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.285768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.285788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.289642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with 
pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.289855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.662 [2024-11-19 11:38:19.289875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.662 [2024-11-19 11:38:19.293678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.662 [2024-11-19 11:38:19.293855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.293873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.298211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.298407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.298434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.302982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.303169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.303188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.307087] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.307296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.307317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.311036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.311246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.311266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.315030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.315236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.315256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.318898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.319104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.319124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 
11:38:19.322885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.323074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.323093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.327033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.327182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.327201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.331406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.331583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.331603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.336295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.336498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.336516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.340326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.340532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.340553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.344352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.344558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.344577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.348364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.348557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.348575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.352278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.352458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.352478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.356387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.356586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.356606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.360859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.361071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.361089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.365694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.365907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.365926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.370043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.370248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.370269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.374184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.374389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.374409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.378709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.378906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.378925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.383505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.383682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.383701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.387943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.388151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:05.663 [2024-11-19 11:38:19.388175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.392316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.392527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.392547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.396609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.396814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.396834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.400549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.400749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.400768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.663 [2024-11-19 11:38:19.404809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.663 [2024-11-19 11:38:19.405000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.663 [2024-11-19 11:38:19.405019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.664 [2024-11-19 11:38:19.409355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.664 [2024-11-19 11:38:19.409553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.664 [2024-11-19 11:38:19.409571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.664 [2024-11-19 11:38:19.413354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.664 [2024-11-19 11:38:19.413563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.664 [2024-11-19 11:38:19.413583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.664 6226.00 IOPS, 778.25 MiB/s [2024-11-19T10:38:19.445Z] [2024-11-19 11:38:19.418316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.664 [2024-11-19 11:38:19.418431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.664 [2024-11-19 11:38:19.418450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.664 [2024-11-19 11:38:19.422238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.664 [2024-11-19 
11:38:19.422401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.664 [2024-11-19 11:38:19.422420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.664 [2024-11-19 11:38:19.426346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.664 [2024-11-19 11:38:19.426519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.664 [2024-11-19 11:38:19.426538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.664 [2024-11-19 11:38:19.430367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.664 [2024-11-19 11:38:19.430538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.664 [2024-11-19 11:38:19.430559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.664 [2024-11-19 11:38:19.434437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.664 [2024-11-19 11:38:19.434614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.664 [2024-11-19 11:38:19.434635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.664 [2024-11-19 11:38:19.438549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) 
with pdu=0x2000166ff3c8 00:27:05.925 [2024-11-19 11:38:19.438743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.925 [2024-11-19 11:38:19.438764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.925 [2024-11-19 11:38:19.442648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.442798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.442817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.446763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.446961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.446980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.450888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.451064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.451082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.455268] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.455450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.455471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.459332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.459521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.459540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.463443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.463629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.463650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.467565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.467751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.467770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 
11:38:19.471789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.472021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.472041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.477384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.477593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.477613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.482963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.483163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.483183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.488520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.488778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.488798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.495176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.495336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.495355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.500189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.500360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.500378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.504379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.504557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.504579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.508358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.508527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.508545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.512313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.512471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.512489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.516357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.516522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.516542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.520749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.520901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.520919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.525504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.525648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.525666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.529688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.529853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.529872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.533835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.534001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.534020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.537932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.538111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.538130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.542059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.542223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:05.926 [2024-11-19 11:38:19.542243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.546095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.546265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.546286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.550262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.926 [2024-11-19 11:38:19.550428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.926 [2024-11-19 11:38:19.550447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.926 [2024-11-19 11:38:19.554296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.927 [2024-11-19 11:38:19.554489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.554509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.558178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.927 [2024-11-19 11:38:19.558365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.558385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.562257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.927 [2024-11-19 11:38:19.562423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.562443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.567353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.927 [2024-11-19 11:38:19.567487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.567504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.572165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.927 [2024-11-19 11:38:19.572325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.572343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.576663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.927 [2024-11-19 11:38:19.576820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.576838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.582154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.927 [2024-11-19 11:38:19.582288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.582307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.587811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.927 [2024-11-19 11:38:19.587967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.587987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.591836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.927 [2024-11-19 11:38:19.592017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.592036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.595829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 
00:27:05.927 [2024-11-19 11:38:19.596002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.596020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.599736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.927 [2024-11-19 11:38:19.599901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.599920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.603790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.927 [2024-11-19 11:38:19.603936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.603959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.607916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:05.927 [2024-11-19 11:38:19.608084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.927 [2024-11-19 11:38:19.608103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.927 [2024-11-19 11:38:19.611945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.612141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.612160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.616041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.616203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.616226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.620021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.620168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.620188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.623964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.624148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.624170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.628157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.628313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.628332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.632526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.632692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.632713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.637134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.637263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.637282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.641194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.641328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.641346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.645112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.645257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.645275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.649080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.649227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.649246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.653015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.653152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.653171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.656759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.656916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.656934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.660536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.660674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.660693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.664296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.664444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.927 [2024-11-19 11:38:19.664463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.927 [2024-11-19 11:38:19.668075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.927 [2024-11-19 11:38:19.668207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.928 [2024-11-19 11:38:19.668225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.928 [2024-11-19 11:38:19.671937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.928 [2024-11-19 11:38:19.672088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.928 [2024-11-19 11:38:19.672106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.928 [2024-11-19 11:38:19.676991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.928 [2024-11-19 11:38:19.677133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.928 [2024-11-19 11:38:19.677151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.928 [2024-11-19 11:38:19.681853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.928 [2024-11-19 11:38:19.681986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.928 [2024-11-19 11:38:19.682004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.928 [2024-11-19 11:38:19.686532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.928 [2024-11-19 11:38:19.686655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.928 [2024-11-19 11:38:19.686674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.928 [2024-11-19 11:38:19.691267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.928 [2024-11-19 11:38:19.691386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.928 [2024-11-19 11:38:19.691406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.928 [2024-11-19 11:38:19.695807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.928 [2024-11-19 11:38:19.695911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.928 [2024-11-19 11:38:19.695931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.928 [2024-11-19 11:38:19.700461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:05.928 [2024-11-19 11:38:19.700611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.928 [2024-11-19 11:38:19.700630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.189 [2024-11-19 11:38:19.705171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.189 [2024-11-19 11:38:19.705288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.189 [2024-11-19 11:38:19.705308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.189 [2024-11-19 11:38:19.709868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.189 [2024-11-19 11:38:19.710002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.189 [2024-11-19 11:38:19.710021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.189 [2024-11-19 11:38:19.714409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.189 [2024-11-19 11:38:19.714515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.189 [2024-11-19 11:38:19.714534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.189 [2024-11-19 11:38:19.719042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.189 [2024-11-19 11:38:19.719147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.189 [2024-11-19 11:38:19.719170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.189 [2024-11-19 11:38:19.723784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.189 [2024-11-19 11:38:19.723925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.189 [2024-11-19 11:38:19.723944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.189 [2024-11-19 11:38:19.727811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.189 [2024-11-19 11:38:19.727961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.728000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.731800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.731938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.731963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.735766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.735894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.735913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.739704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.739828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.739847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.743681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.743819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.743838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.748237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.748366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.748385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.753539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.753764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.753785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.759121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.759275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.759294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.765408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.765567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.765585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.771917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.772071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.772089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.777058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.777182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.777200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.782915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.783045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.783064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.788136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.788248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.788266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.793051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.793171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.793190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.797057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.797189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.797208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.800988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.801107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.801126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.804809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.804926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.804946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.808785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.808907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.808925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.812793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.812921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.812939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.816707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.816845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.816863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.820505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.820643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.820662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.824353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.824496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.824515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.829053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.829207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.829225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.834503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.834743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.834765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.841685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.841873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.841893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.848153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.848425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.848447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.854265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.854444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.190 [2024-11-19 11:38:19.854467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.190 [2024-11-19 11:38:19.860426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.190 [2024-11-19 11:38:19.860680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.860701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.866968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.867221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.867243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.873600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.873850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.873872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.879731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.879922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.879942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.885913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.886177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.886198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.892098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.892383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.892404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.898405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.898584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.898603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.904840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.904973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.904993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.911393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.911556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.911575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.917572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.917703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.917722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.923160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.923299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.923318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.928148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.928206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.928225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.932380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.932496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.932515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.936642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.936752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.936772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.941045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.941119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.941139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.945323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.945446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.945466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.949454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.949517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.949536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.954263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.954446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.954466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.191 [2024-11-19 11:38:19.959646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.191 [2024-11-19 11:38:19.959790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.191 [2024-11-19 11:38:19.959810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.452 [2024-11-19 11:38:19.966167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.452 [2024-11-19 11:38:19.966303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.452 [2024-11-19 11:38:19.966322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.452 [2024-11-19 11:38:19.972273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.452 [2024-11-19 11:38:19.972344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.452 [2024-11-19 11:38:19.972364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.452 [2024-11-19 11:38:19.977623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.452 [2024-11-19 11:38:19.977696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.452 [2024-11-19 11:38:19.977716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.453 [2024-11-19 11:38:19.982310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.453 [2024-11-19 11:38:19.982381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.453 [2024-11-19 11:38:19.982400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.453 [2024-11-19 11:38:19.987218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.453 [2024-11-19 11:38:19.987290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.453 [2024-11-19 11:38:19.987309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.453 [2024-11-19 11:38:19.992165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.453 [2024-11-19 11:38:19.992239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.453 [2024-11-19 11:38:19.992258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.453 [2024-11-19 11:38:19.996595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.453 [2024-11-19 11:38:19.996666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.453 [2024-11-19 11:38:19.996689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.453 [2024-11-19 11:38:20.002056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.453 [2024-11-19 11:38:20.002134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.453 [2024-11-19 11:38:20.002153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.453 [2024-11-19 11:38:20.006611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8
00:27:06.453 [2024-11-19 11:38:20.006683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.453 [2024-11-19 11:38:20.006702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.011446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.011531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.011556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.017596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.017667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.017689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.022289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.022364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.022384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.026636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.026710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.026731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.031689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.031760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.031778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.036284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.036358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.036378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.040992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.041069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.041088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.046664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.046739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:06.453 [2024-11-19 11:38:20.046762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.051206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.051276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.051296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.055919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.056003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.056023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.060581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.060654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.060673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.065253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.065323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.065343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.069606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.069678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.069697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.073867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.073941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.073967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.077774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.077847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.077867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.081680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.081750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.081769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.085690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.085775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.085795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.089771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.089845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.089864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.093818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.453 [2024-11-19 11:38:20.093896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.093915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.097896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 
00:27:06.453 [2024-11-19 11:38:20.097990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.453 [2024-11-19 11:38:20.098014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.453 [2024-11-19 11:38:20.103146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.103241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.103264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.107142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.107229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.107250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.111494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.111578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.111598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.116020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.116107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.116132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.120859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.120966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.120985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.124828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.124922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.124941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.129211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.129298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.129317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.133683] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.133770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.133790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.137924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.138032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.138052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.141845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.141931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.141956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.145757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.145845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.145864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:06.454 [2024-11-19 11:38:20.149706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.149791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.149809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.153628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.153717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.153736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.157582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.157664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.157683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.161608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.161694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.161713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.165626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.165708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.165727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.169557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.169642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.169661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.173481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.173563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.173582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.177451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.177539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.177558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.181385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.181468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.181487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.185886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.185978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.185998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.190447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.190540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.190558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.194771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.194859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:06.454 [2024-11-19 11:38:20.194878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.199334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.199418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.199437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.203760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.203852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.203870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.208615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.454 [2024-11-19 11:38:20.208702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.454 [2024-11-19 11:38:20.208722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.454 [2024-11-19 11:38:20.213170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.455 [2024-11-19 11:38:20.213264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.455 [2024-11-19 11:38:20.213282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.455 [2024-11-19 11:38:20.217651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.455 [2024-11-19 11:38:20.217735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.455 [2024-11-19 11:38:20.217754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.455 [2024-11-19 11:38:20.221815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.455 [2024-11-19 11:38:20.221896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.455 [2024-11-19 11:38:20.221915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.455 [2024-11-19 11:38:20.226703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.455 [2024-11-19 11:38:20.226790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.455 [2024-11-19 11:38:20.226814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.716 [2024-11-19 11:38:20.231351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.716 [2024-11-19 11:38:20.231436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.716 [2024-11-19 11:38:20.231456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.716 [2024-11-19 11:38:20.236012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.716 [2024-11-19 11:38:20.236098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.716 [2024-11-19 11:38:20.236118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.716 [2024-11-19 11:38:20.240429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.716 [2024-11-19 11:38:20.240512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.716 [2024-11-19 11:38:20.240531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.716 [2024-11-19 11:38:20.245337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.716 [2024-11-19 11:38:20.245424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.716 [2024-11-19 11:38:20.245443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.716 [2024-11-19 11:38:20.249848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 
00:27:06.717 [2024-11-19 11:38:20.249931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.249958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.253961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.254067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.254086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.257818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.257903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.257922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.261681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.261766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.261785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.265621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.265710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.265730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.269579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.269663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.269682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.273491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.273581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.273600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.277380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.277471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.277490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.281286] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.281373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.281392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.285468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.285556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.285575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.290478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.290569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.290587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.294602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.294686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.294705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:06.717 [2024-11-19 11:38:20.298464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.298547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.298567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.302459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.302541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.302560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.306478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.306561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.306580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.310438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.310530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.310549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.314452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.314545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.314564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.318456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.318543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.318562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.322405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.322501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.322519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.326222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.326311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.326330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.330178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.330273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.330293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.334474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.334557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.334580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.339203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.339291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.339310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.343351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.343461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:06.717 [2024-11-19 11:38:20.343480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.347401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.347482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.347501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.351605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.351705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.351723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.717 [2024-11-19 11:38:20.355577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.717 [2024-11-19 11:38:20.355660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.717 [2024-11-19 11:38:20.355679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.359637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.359720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.359739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.363570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.363656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.363674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.367451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.367557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.367575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.371730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.371836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.371855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.376761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.376849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.376868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.380928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.381024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.381043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.384902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.385007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.385027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.388932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.389048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.389067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.392912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 
00:27:06.718 [2024-11-19 11:38:20.392998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.393017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.396743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.396833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.396852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.400681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.400782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.400801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.404837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.404919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.404938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.409783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.409868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.409887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.414176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.414272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.414291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.718 [2024-11-19 11:38:20.418246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1951980) with pdu=0x2000166ff3c8 00:27:06.718 [2024-11-19 11:38:20.418336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.718 [2024-11-19 11:38:20.418354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.718 6543.50 IOPS, 817.94 MiB/s 00:27:06.718 Latency(us) 00:27:06.718 [2024-11-19T10:38:20.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.718 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:06.718 nvme0n1 : 2.00 6543.04 817.88 0.00 0.00 2441.28 1503.05 14246.96 00:27:06.718 [2024-11-19T10:38:20.499Z] =================================================================================================================== 00:27:06.718 [2024-11-19T10:38:20.499Z] Total : 6543.04 817.88 
0.00 0.00 2441.28 1503.05 14246.96 00:27:06.718 { 00:27:06.718 "results": [ 00:27:06.718 { 00:27:06.718 "job": "nvme0n1", 00:27:06.718 "core_mask": "0x2", 00:27:06.718 "workload": "randwrite", 00:27:06.718 "status": "finished", 00:27:06.718 "queue_depth": 16, 00:27:06.718 "io_size": 131072, 00:27:06.718 "runtime": 2.003197, 00:27:06.718 "iops": 6543.040949042955, 00:27:06.718 "mibps": 817.8801186303693, 00:27:06.718 "io_failed": 0, 00:27:06.718 "io_timeout": 0, 00:27:06.718 "avg_latency_us": 2441.276653099406, 00:27:06.718 "min_latency_us": 1503.0539130434784, 00:27:06.718 "max_latency_us": 14246.95652173913 00:27:06.718 } 00:27:06.718 ], 00:27:06.718 "core_count": 1 00:27:06.718 } 00:27:06.718 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:06.718 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:06.718 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:06.718 | .driver_specific 00:27:06.718 | .nvme_error 00:27:06.718 | .status_code 00:27:06.718 | .command_transient_transport_error' 00:27:06.718 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:06.978 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 423 > 0 )) 00:27:06.978 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2411055 00:27:06.978 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2411055 ']' 00:27:06.978 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2411055 00:27:06.978 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # uname 00:27:06.978 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:06.978 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2411055 00:27:06.978 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:06.978 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:06.978 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2411055' 00:27:06.978 killing process with pid 2411055 00:27:06.978 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2411055 00:27:06.978 Received shutdown signal, test time was about 2.000000 seconds 00:27:06.978 00:27:06.978 Latency(us) 00:27:06.978 [2024-11-19T10:38:20.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.978 [2024-11-19T10:38:20.759Z] =================================================================================================================== 00:27:06.978 [2024-11-19T10:38:20.759Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:06.979 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2411055 00:27:07.238 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2409190 00:27:07.238 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2409190 ']' 00:27:07.238 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2409190 00:27:07.238 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:07.238 11:38:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.238 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2409190 00:27:07.238 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:07.238 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:07.238 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2409190' 00:27:07.238 killing process with pid 2409190 00:27:07.238 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2409190 00:27:07.238 11:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2409190 00:27:07.499 00:27:07.499 real 0m14.143s 00:27:07.499 user 0m26.865s 00:27:07.499 sys 0m4.746s 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:07.499 ************************************ 00:27:07.499 END TEST nvmf_digest_error 00:27:07.499 ************************************ 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 
00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:07.499 rmmod nvme_tcp 00:27:07.499 rmmod nvme_fabrics 00:27:07.499 rmmod nvme_keyring 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2409190 ']' 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2409190 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2409190 ']' 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2409190 00:27:07.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2409190) - No such process 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2409190 is not found' 00:27:07.499 Process with pid 2409190 is not found 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:07.499 
11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.499 11:38:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.038 11:38:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.038 00:27:10.038 real 0m36.336s 00:27:10.038 user 0m55.244s 00:27:10.038 sys 0m13.819s 00:27:10.038 11:38:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.038 11:38:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:10.038 ************************************ 00:27:10.038 END TEST nvmf_digest 00:27:10.038 ************************************ 00:27:10.038 11:38:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.039 ************************************ 00:27:10.039 START TEST nvmf_bdevperf 00:27:10.039 ************************************ 00:27:10.039 11:38:23 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:10.039 * Looking for test storage... 00:27:10.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@345 -- # : 1 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:10.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.039 --rc genhtml_branch_coverage=1 00:27:10.039 --rc genhtml_function_coverage=1 00:27:10.039 --rc 
genhtml_legend=1 00:27:10.039 --rc geninfo_all_blocks=1 00:27:10.039 --rc geninfo_unexecuted_blocks=1 00:27:10.039 00:27:10.039 ' 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:10.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.039 --rc genhtml_branch_coverage=1 00:27:10.039 --rc genhtml_function_coverage=1 00:27:10.039 --rc genhtml_legend=1 00:27:10.039 --rc geninfo_all_blocks=1 00:27:10.039 --rc geninfo_unexecuted_blocks=1 00:27:10.039 00:27:10.039 ' 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:10.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.039 --rc genhtml_branch_coverage=1 00:27:10.039 --rc genhtml_function_coverage=1 00:27:10.039 --rc genhtml_legend=1 00:27:10.039 --rc geninfo_all_blocks=1 00:27:10.039 --rc geninfo_unexecuted_blocks=1 00:27:10.039 00:27:10.039 ' 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:10.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.039 --rc genhtml_branch_coverage=1 00:27:10.039 --rc genhtml_function_coverage=1 00:27:10.039 --rc genhtml_legend=1 00:27:10.039 --rc geninfo_all_blocks=1 00:27:10.039 --rc geninfo_unexecuted_blocks=1 00:27:10.039 00:27:10.039 ' 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.039 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:10.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:27:10.040 11:38:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:16.614 Found 
0000:86:00.0 (0x8086 - 0x159b) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:16.614 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:16.614 Found net devices under 0000:86:00.0: cvl_0_0 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:16.614 Found net devices under 0000:86:00.1: cvl_0_1 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:16.614 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:16.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:27:16.615 00:27:16.615 --- 10.0.0.2 ping statistics --- 00:27:16.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.615 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:16.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:27:16.615 00:27:16.615 --- 10.0.0.1 ping statistics --- 00:27:16.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.615 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2415069 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2415069 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2415069 ']' 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.615 [2024-11-19 11:38:29.520514] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:27:16.615 [2024-11-19 11:38:29.520560] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.615 [2024-11-19 11:38:29.601322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:16.615 [2024-11-19 11:38:29.644375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.615 [2024-11-19 11:38:29.644412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:16.615 [2024-11-19 11:38:29.644419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.615 [2024-11-19 11:38:29.644425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.615 [2024-11-19 11:38:29.644430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.615 [2024-11-19 11:38:29.645802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.615 [2024-11-19 11:38:29.645910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.615 [2024-11-19 11:38:29.645911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.615 [2024-11-19 11:38:29.782630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.615 11:38:29 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.615 Malloc0 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.615 [2024-11-19 11:38:29.853101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:16.615 { 00:27:16.615 "params": { 00:27:16.615 "name": "Nvme$subsystem", 00:27:16.615 "trtype": "$TEST_TRANSPORT", 00:27:16.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.615 "adrfam": "ipv4", 00:27:16.615 "trsvcid": "$NVMF_PORT", 00:27:16.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.615 "hdgst": ${hdgst:-false}, 00:27:16.615 "ddgst": ${ddgst:-false} 00:27:16.615 }, 00:27:16.615 "method": "bdev_nvme_attach_controller" 00:27:16.615 } 00:27:16.615 EOF 00:27:16.615 )") 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:16.615 11:38:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:16.615 "params": { 00:27:16.615 "name": "Nvme1", 00:27:16.615 "trtype": "tcp", 00:27:16.615 "traddr": "10.0.0.2", 00:27:16.615 "adrfam": "ipv4", 00:27:16.615 "trsvcid": "4420", 00:27:16.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:16.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:16.615 "hdgst": false, 00:27:16.615 "ddgst": false 00:27:16.615 }, 00:27:16.615 "method": "bdev_nvme_attach_controller" 00:27:16.615 }' 00:27:16.615 [2024-11-19 11:38:29.903106] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:27:16.615 [2024-11-19 11:38:29.903150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415092 ] 00:27:16.615 [2024-11-19 11:38:29.978205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.615 [2024-11-19 11:38:30.026281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.615 Running I/O for 1 seconds... 
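The 10.0.0.2 target that this bdevperf run connects to lives inside the cvl_0_0_ns_spdk network namespace assembled by nvmf_tcp_init earlier in this trace. A condensed, define-only sketch of that wiring follows (interface names, addresses, port, and the iptables rule are copied from the log; the wrapper name `setup_tcp_netns` is invented, and actually running it requires root plus the real cvl_0_* devices):

```shell
# Hypothetical wrapper around the namespace setup steps visible in the
# trace. Define-only: invoke it explicitly, as root, on a host that has
# the cvl_0_0/cvl_0_1 interfaces.
setup_tcp_netns() {
    local ns=cvl_0_0_ns_spdk
    ip netns add "$ns"                      # isolated namespace for the target
    ip link set cvl_0_0 netns "$ns"         # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator side stays in the root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # admit NVMe/TCP traffic on the listening port from the initiator side
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # root ns -> namespace reachability
}
```

The two ping transcripts earlier in the log are exactly this reachability check, once in each direction.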
00:27:17.552 10884.00 IOPS, 42.52 MiB/s
00:27:17.552 Latency(us)
00:27:17.552 [2024-11-19T10:38:31.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:17.552 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:17.552 Verification LBA range: start 0x0 length 0x4000
00:27:17.552 Nvme1n1 : 1.01 10957.03 42.80 0.00 0.00 11635.43 2421.98 13506.11
00:27:17.552 [2024-11-19T10:38:31.333Z] ===================================================================================================================
00:27:17.552 [2024-11-19T10:38:31.333Z] Total : 10957.03 42.80 0.00 0.00 11635.43 2421.98 13506.11
00:27:17.812 11:38:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2415331
00:27:17.812 11:38:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:27:17.812 11:38:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:27:17.812 11:38:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:27:17.812 11:38:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:27:17.812 11:38:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:27:17.812 11:38:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:17.812 11:38:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:17.812 {
00:27:17.812 "params": {
00:27:17.812 "name": "Nvme$subsystem",
00:27:17.812 "trtype": "$TEST_TRANSPORT",
00:27:17.812 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:17.812 "adrfam": "ipv4",
00:27:17.812 "trsvcid": "$NVMF_PORT",
00:27:17.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:17.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:17.812 "hdgst": ${hdgst:-false},
00:27:17.812 "ddgst": ${ddgst:-false}
00:27:17.812 },
00:27:17.812 "method": "bdev_nvme_attach_controller"
00:27:17.812 }
00:27:17.812 EOF
00:27:17.812 )")
00:27:17.812 11:38:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:27:17.812 11:38:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:27:17.812 11:38:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:27:17.812 11:38:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:27:17.812 "params": {
00:27:17.812 "name": "Nvme1",
00:27:17.812 "trtype": "tcp",
00:27:17.812 "traddr": "10.0.0.2",
00:27:17.812 "adrfam": "ipv4",
00:27:17.812 "trsvcid": "4420",
00:27:17.812 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:17.812 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:17.812 "hdgst": false,
00:27:17.812 "ddgst": false
00:27:17.812 },
00:27:17.812 "method": "bdev_nvme_attach_controller"
00:27:17.812 }'
00:27:17.812 [2024-11-19 11:38:31.454921] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:27:17.812 [2024-11-19 11:38:31.454975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415331 ]
00:27:17.812 [2024-11-19 11:38:31.533328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:17.812 [2024-11-19 11:38:31.573435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:18.381 Running I/O for 15 seconds...
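The trace above shows gen_nvmf_target_json assembling a bdev_nvme_attach_controller JSON blob from a heredoc template (with ${var:-default} fallbacks for hdgst/ddgst) and handing it to bdevperf over /dev/fd/63 via process substitution. A minimal standalone sketch of that pattern follows; the default values baked in here (tcp, 10.0.0.2, 4420) and the final cat stand-in for bdevperf are illustrative assumptions, not the real nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen in the trace above:
# build one attach-controller JSON object per subsystem from a heredoc
# template, join them with commas, and let a consumer read the result
# through process substitution (bdevperf does this as --json /dev/fd/63).
gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    # Unquoted EOF delimiter, so $subsystem and ${var:-default} expand.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,          # join multiple subsystem objects with commas
  printf '%s\n' "${config[*]}"
}

# Stand-in for bdevperf: just print what arrives on the substituted fd.
cat <(gen_nvmf_target_json 1)
```

Because the consumer reads a file descriptor rather than a temp file, the generated config never touches disk, which is why the log shows the literal path /dev/fd/63 in the bdevperf command line.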
00:27:20.262 10738.00 IOPS, 41.95 MiB/s [2024-11-19T10:38:34.616Z] 10865.00 IOPS, 42.44 MiB/s [2024-11-19T10:38:34.616Z] 11:38:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2415069 00:27:20.835 11:38:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:20.835 [2024-11-19 11:38:34.420908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.835 [2024-11-19 11:38:34.420951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.420969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.835 [2024-11-19 11:38:34.420979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.420989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.835 [2024-11-19 11:38:34.420997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.421005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.835 [2024-11-19 11:38:34.421012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.421021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.835 [2024-11-19 11:38:34.421030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.421040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.835 [2024-11-19 11:38:34.421048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.421056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.835 [2024-11-19 11:38:34.421063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.421072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.835 [2024-11-19 11:38:34.421080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.421090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.835 [2024-11-19 11:38:34.421097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.421105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.835 [2024-11-19 11:38:34.421113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.421123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:20.835 [2024-11-19 11:38:34.421137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.421146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.835 [2024-11-19 11:38:34.421153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.421164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.835 [2024-11-19 11:38:34.421174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.421185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.835 [2024-11-19 11:38:34.421194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.835 [2024-11-19 11:38:34.421205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.836 [2024-11-19 11:38:34.421326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.836 [2024-11-19 11:38:34.421343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.836 [2024-11-19 11:38:34.421359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.836 [2024-11-19 11:38:34.421375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:88848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 
[2024-11-19 11:38:34.421438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 
[2024-11-19 11:38:34.421695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.836 [2024-11-19 11:38:34.421820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.836 [2024-11-19 11:38:34.421829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.421836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.421844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.421851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.421860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.421866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.421874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.421881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.421889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.421896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.421904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.421911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.421919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.421926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.421934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.421940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:20.837 [2024-11-19 11:38:34.422071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:20.837 [2024-11-19 11:38:34.422334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.837 [2024-11-19 11:38:34.422472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.837 [2024-11-19 11:38:34.422527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.837 [2024-11-19 11:38:34.422533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 
[2024-11-19 11:38:34.422594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 
[2024-11-19 11:38:34.422856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.422992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.422999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.423007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.423013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.423022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.423029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.423039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.423046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.423054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.423060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.423068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.838 [2024-11-19 11:38:34.423076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.423084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1317cf0 is same with the state(6) to be set 00:27:20.838 [2024-11-19 11:38:34.423093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.838 [2024-11-19 11:38:34.423098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.838 [2024-11-19 11:38:34.423105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89664 len:8 PRP1 0x0 PRP2 0x0 00:27:20.838 [2024-11-19 11:38:34.423117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.838 [2024-11-19 11:38:34.426026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting 
controller 00:27:20.838 [2024-11-19 11:38:34.426082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.838 [2024-11-19 11:38:34.426662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.838 [2024-11-19 11:38:34.426682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.839 [2024-11-19 11:38:34.426692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.839 [2024-11-19 11:38:34.426872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.839 [2024-11-19 11:38:34.427056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.839 [2024-11-19 11:38:34.427066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.839 [2024-11-19 11:38:34.427075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.839 [2024-11-19 11:38:34.427083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.839 [2024-11-19 11:38:34.439302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.839 [2024-11-19 11:38:34.439654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.839 [2024-11-19 11:38:34.439673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.839 [2024-11-19 11:38:34.439681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.839 [2024-11-19 11:38:34.439844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.839 [2024-11-19 11:38:34.440014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.839 [2024-11-19 11:38:34.440024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.839 [2024-11-19 11:38:34.440035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.839 [2024-11-19 11:38:34.440042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.839 [2024-11-19 11:38:34.452354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.839 [2024-11-19 11:38:34.452749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.839 [2024-11-19 11:38:34.452768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.839 [2024-11-19 11:38:34.452776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.839 [2024-11-19 11:38:34.452957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.839 [2024-11-19 11:38:34.453131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.839 [2024-11-19 11:38:34.453140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.839 [2024-11-19 11:38:34.453147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.839 [2024-11-19 11:38:34.453154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.839 [2024-11-19 11:38:34.465382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.839 [2024-11-19 11:38:34.465806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.839 [2024-11-19 11:38:34.465850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.839 [2024-11-19 11:38:34.465874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.839 [2024-11-19 11:38:34.466467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.839 [2024-11-19 11:38:34.467041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.839 [2024-11-19 11:38:34.467052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.839 [2024-11-19 11:38:34.467059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.839 [2024-11-19 11:38:34.467066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.839 [2024-11-19 11:38:34.478399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.839 [2024-11-19 11:38:34.478741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.839 [2024-11-19 11:38:34.478757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.839 [2024-11-19 11:38:34.478765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.839 [2024-11-19 11:38:34.478928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.839 [2024-11-19 11:38:34.479098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.839 [2024-11-19 11:38:34.479109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.839 [2024-11-19 11:38:34.479117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.839 [2024-11-19 11:38:34.479123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.839 [2024-11-19 11:38:34.491290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.839 [2024-11-19 11:38:34.491577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.839 [2024-11-19 11:38:34.491594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.839 [2024-11-19 11:38:34.491602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.839 [2024-11-19 11:38:34.491774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.839 [2024-11-19 11:38:34.491954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.839 [2024-11-19 11:38:34.491965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.839 [2024-11-19 11:38:34.491972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.839 [2024-11-19 11:38:34.491979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.839 [2024-11-19 11:38:34.504194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.839 [2024-11-19 11:38:34.504489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.839 [2024-11-19 11:38:34.504507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.839 [2024-11-19 11:38:34.504515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.839 [2024-11-19 11:38:34.504687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.839 [2024-11-19 11:38:34.504860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.839 [2024-11-19 11:38:34.504870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.839 [2024-11-19 11:38:34.504877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.839 [2024-11-19 11:38:34.504884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.839 [2024-11-19 11:38:34.517219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.839 [2024-11-19 11:38:34.517571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.839 [2024-11-19 11:38:34.517593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.839 [2024-11-19 11:38:34.517601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.839 [2024-11-19 11:38:34.517765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.839 [2024-11-19 11:38:34.517928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.839 [2024-11-19 11:38:34.517938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.839 [2024-11-19 11:38:34.517944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.839 [2024-11-19 11:38:34.517958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.839 [2024-11-19 11:38:34.530261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.839 [2024-11-19 11:38:34.530551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.839 [2024-11-19 11:38:34.530569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.839 [2024-11-19 11:38:34.530581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.839 [2024-11-19 11:38:34.530754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.839 [2024-11-19 11:38:34.530927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.839 [2024-11-19 11:38:34.530937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.839 [2024-11-19 11:38:34.530944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.839 [2024-11-19 11:38:34.530958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.839 [2024-11-19 11:38:34.543344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.839 [2024-11-19 11:38:34.543816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.839 [2024-11-19 11:38:34.543861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.839 [2024-11-19 11:38:34.543885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.839 [2024-11-19 11:38:34.544382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.839 [2024-11-19 11:38:34.544558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.839 [2024-11-19 11:38:34.544568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.839 [2024-11-19 11:38:34.544574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.839 [2024-11-19 11:38:34.544582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.839 [2024-11-19 11:38:34.556198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.840 [2024-11-19 11:38:34.556554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.840 [2024-11-19 11:38:34.556571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.840 [2024-11-19 11:38:34.556579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.840 [2024-11-19 11:38:34.556741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.840 [2024-11-19 11:38:34.556904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.840 [2024-11-19 11:38:34.556913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.840 [2024-11-19 11:38:34.556919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.840 [2024-11-19 11:38:34.556926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.840 [2024-11-19 11:38:34.569045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.840 [2024-11-19 11:38:34.569466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.840 [2024-11-19 11:38:34.569484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.840 [2024-11-19 11:38:34.569491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.840 [2024-11-19 11:38:34.569654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.840 [2024-11-19 11:38:34.569821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.840 [2024-11-19 11:38:34.569831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.840 [2024-11-19 11:38:34.569838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.840 [2024-11-19 11:38:34.569845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.840 [2024-11-19 11:38:34.581954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.840 [2024-11-19 11:38:34.582314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.840 [2024-11-19 11:38:34.582359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.840 [2024-11-19 11:38:34.582383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.840 [2024-11-19 11:38:34.582982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.840 [2024-11-19 11:38:34.583148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.840 [2024-11-19 11:38:34.583157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.840 [2024-11-19 11:38:34.583164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.840 [2024-11-19 11:38:34.583170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.840 [2024-11-19 11:38:34.594749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:20.840 [2024-11-19 11:38:34.595124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.840 [2024-11-19 11:38:34.595142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:20.840 [2024-11-19 11:38:34.595150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:20.840 [2024-11-19 11:38:34.595313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:20.840 [2024-11-19 11:38:34.595478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:20.840 [2024-11-19 11:38:34.595488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:20.840 [2024-11-19 11:38:34.595494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:20.840 [2024-11-19 11:38:34.595501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:20.840 [2024-11-19 11:38:34.607837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:20.840 [2024-11-19 11:38:34.608201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.840 [2024-11-19 11:38:34.608219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:20.840 [2024-11-19 11:38:34.608228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:20.840 [2024-11-19 11:38:34.608400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:20.840 [2024-11-19 11:38:34.608572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:20.840 [2024-11-19 11:38:34.608582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:20.840 [2024-11-19 11:38:34.608593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:20.840 [2024-11-19 11:38:34.608600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.101 [2024-11-19 11:38:34.620786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.101 [2024-11-19 11:38:34.621165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.101 [2024-11-19 11:38:34.621184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.101 [2024-11-19 11:38:34.621193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.101 [2024-11-19 11:38:34.621365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.101 [2024-11-19 11:38:34.621539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.101 [2024-11-19 11:38:34.621549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.101 [2024-11-19 11:38:34.621555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.101 [2024-11-19 11:38:34.621562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.101 [2024-11-19 11:38:34.633735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.101 [2024-11-19 11:38:34.634110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.101 [2024-11-19 11:38:34.634128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.101 [2024-11-19 11:38:34.634136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.101 [2024-11-19 11:38:34.634300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.101 [2024-11-19 11:38:34.634464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.101 [2024-11-19 11:38:34.634473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.101 [2024-11-19 11:38:34.634480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.101 [2024-11-19 11:38:34.634486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.101 [2024-11-19 11:38:34.646570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.101 [2024-11-19 11:38:34.647000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.101 [2024-11-19 11:38:34.647018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.101 [2024-11-19 11:38:34.647026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.101 [2024-11-19 11:38:34.647189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.101 [2024-11-19 11:38:34.647354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.101 [2024-11-19 11:38:34.647363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.101 [2024-11-19 11:38:34.647370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.101 [2024-11-19 11:38:34.647377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.659499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.659924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.659940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.659964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.660127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.660292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.660301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.660308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.660314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.672506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.672965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.672982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.672990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.673162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.673335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.673345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.673352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.673359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.685554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.685899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.685917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.685925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.686108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.686286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.686296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.686303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.686310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.698632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.699049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.699068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.699079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.699257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.699436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.699447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.699454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.699462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.711644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.712055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.712102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.712125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.712588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.712762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.712772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.712779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.712786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.724621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.724987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.725005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.725012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.725184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.725358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.725368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.725374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.725382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.737635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.738075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.738094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.738102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.738265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.738431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.738441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.738447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.738453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.750547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.750960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.750978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.750986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.751148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.751311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.751320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.751326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.751333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.763363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.763763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.763807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.763830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.764429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.764881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.764890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.764897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.764903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.776207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.776599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.776616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.776624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.776787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.776956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.776966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.776977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.776985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.789050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.789479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.789497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.789504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.789667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.789831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.789840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.789846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.789853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.801882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.802308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.802352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.802376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.802809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.802979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.802989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.802996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.803002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.814767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.815166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.815184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.815191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.815353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.815517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.815526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.815533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.815539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.827616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.828036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.828053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.828061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.828223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.828387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.828397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.828403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.828409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.840476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.840883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.840926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.840963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.841546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.842024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.842033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.842040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.842047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.853326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.853743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.853760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.853767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.853939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.854119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.854130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.854136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.854143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.102 [2024-11-19 11:38:34.866252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.102 [2024-11-19 11:38:34.866655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.102 [2024-11-19 11:38:34.866672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.102 [2024-11-19 11:38:34.866683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.102 [2024-11-19 11:38:34.866846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.102 [2024-11-19 11:38:34.867015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.102 [2024-11-19 11:38:34.867024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.102 [2024-11-19 11:38:34.867031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.102 [2024-11-19 11:38:34.867038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.363 [2024-11-19 11:38:34.879211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.363 [2024-11-19 11:38:34.879591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.363 [2024-11-19 11:38:34.879608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.363 [2024-11-19 11:38:34.879617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.363 [2024-11-19 11:38:34.879789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.363 [2024-11-19 11:38:34.879978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.363 [2024-11-19 11:38:34.879988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.363 [2024-11-19 11:38:34.879994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.363 [2024-11-19 11:38:34.880001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.363 [2024-11-19 11:38:34.892161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.363 [2024-11-19 11:38:34.892589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.363 [2024-11-19 11:38:34.892634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.363 [2024-11-19 11:38:34.892657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.363 [2024-11-19 11:38:34.893126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.363 [2024-11-19 11:38:34.893291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.363 [2024-11-19 11:38:34.893301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.363 [2024-11-19 11:38:34.893307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.363 [2024-11-19 11:38:34.893314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.363 9158.33 IOPS, 35.77 MiB/s [2024-11-19T10:38:35.144Z]
00:27:21.363 [2024-11-19 11:38:34.905066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.363 [2024-11-19 11:38:34.905397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.363 [2024-11-19 11:38:34.905414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.363 [2024-11-19 11:38:34.905422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.363 [2024-11-19 11:38:34.905584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.363 [2024-11-19 11:38:34.905751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.363 [2024-11-19 11:38:34.905760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.363 [2024-11-19 11:38:34.905766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.363 [2024-11-19 11:38:34.905773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.363 [2024-11-19 11:38:34.917870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.363 [2024-11-19 11:38:34.918192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.363 [2024-11-19 11:38:34.918209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.363 [2024-11-19 11:38:34.918217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.363 [2024-11-19 11:38:34.918380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.363 [2024-11-19 11:38:34.918543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.363 [2024-11-19 11:38:34.918553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.363 [2024-11-19 11:38:34.918559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.363 [2024-11-19 11:38:34.918565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.363 [2024-11-19 11:38:34.930800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.363 [2024-11-19 11:38:34.931243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.363 [2024-11-19 11:38:34.931260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.363 [2024-11-19 11:38:34.931269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.363 [2024-11-19 11:38:34.931441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.363 [2024-11-19 11:38:34.931613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.363 [2024-11-19 11:38:34.931622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.363 [2024-11-19 11:38:34.931629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.363 [2024-11-19 11:38:34.931636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.363 [2024-11-19 11:38:34.943989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.363 [2024-11-19 11:38:34.944361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.363 [2024-11-19 11:38:34.944405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.363 [2024-11-19 11:38:34.944428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.363 [2024-11-19 11:38:34.945021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.363 [2024-11-19 11:38:34.945435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.363 [2024-11-19 11:38:34.945444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.363 [2024-11-19 11:38:34.945458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.363 [2024-11-19 11:38:34.945466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.363 [2024-11-19 11:38:34.956920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.363 [2024-11-19 11:38:34.957351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.363 [2024-11-19 11:38:34.957397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.363 [2024-11-19 11:38:34.957420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.363 [2024-11-19 11:38:34.958018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.363 [2024-11-19 11:38:34.958192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.363 [2024-11-19 11:38:34.958202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.364 [2024-11-19 11:38:34.958209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.364 [2024-11-19 11:38:34.958215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.364 [2024-11-19 11:38:34.969803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.364 [2024-11-19 11:38:34.970229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.364 [2024-11-19 11:38:34.970274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.364 [2024-11-19 11:38:34.970297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.364 [2024-11-19 11:38:34.970878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.364 [2024-11-19 11:38:34.971332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.364 [2024-11-19 11:38:34.971343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.364 [2024-11-19 11:38:34.971349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.364 [2024-11-19 11:38:34.971355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.364 [2024-11-19 11:38:34.982670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.364 [2024-11-19 11:38:34.983095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.364 [2024-11-19 11:38:34.983140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.364 [2024-11-19 11:38:34.983163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.364 [2024-11-19 11:38:34.983559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.364 [2024-11-19 11:38:34.983723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.364 [2024-11-19 11:38:34.983732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.364 [2024-11-19 11:38:34.983738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.364 [2024-11-19 11:38:34.983744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.364 [2024-11-19 11:38:34.995579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.364 [2024-11-19 11:38:34.996005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.364 [2024-11-19 11:38:34.996022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.364 [2024-11-19 11:38:34.996029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.364 [2024-11-19 11:38:34.996193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.364 [2024-11-19 11:38:34.996356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.364 [2024-11-19 11:38:34.996365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.364 [2024-11-19 11:38:34.996372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.364 [2024-11-19 11:38:34.996378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.364 [2024-11-19 11:38:35.008455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.364 [2024-11-19 11:38:35.008872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.364 [2024-11-19 11:38:35.008888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.364 [2024-11-19 11:38:35.008897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.364 [2024-11-19 11:38:35.009066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.364 [2024-11-19 11:38:35.009231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.364 [2024-11-19 11:38:35.009241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.364 [2024-11-19 11:38:35.009247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.364 [2024-11-19 11:38:35.009253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.364 [2024-11-19 11:38:35.021324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.364 [2024-11-19 11:38:35.021719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.364 [2024-11-19 11:38:35.021736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.364 [2024-11-19 11:38:35.021743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.364 [2024-11-19 11:38:35.021907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.364 [2024-11-19 11:38:35.022076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.364 [2024-11-19 11:38:35.022086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.364 [2024-11-19 11:38:35.022092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.364 [2024-11-19 11:38:35.022099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.364 [2024-11-19 11:38:35.034171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.364 [2024-11-19 11:38:35.034585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.364 [2024-11-19 11:38:35.034624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.364 [2024-11-19 11:38:35.034658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.364 [2024-11-19 11:38:35.035224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.364 [2024-11-19 11:38:35.035399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.364 [2024-11-19 11:38:35.035409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.364 [2024-11-19 11:38:35.035416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.364 [2024-11-19 11:38:35.035422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.364 [2024-11-19 11:38:35.047060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.364 [2024-11-19 11:38:35.047481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.364 [2024-11-19 11:38:35.047528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.364 [2024-11-19 11:38:35.047551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.364 [2024-11-19 11:38:35.048143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.364 [2024-11-19 11:38:35.048318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.364 [2024-11-19 11:38:35.048328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.364 [2024-11-19 11:38:35.048337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.364 [2024-11-19 11:38:35.048344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.364 [2024-11-19 11:38:35.059959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.364 [2024-11-19 11:38:35.060316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.364 [2024-11-19 11:38:35.060360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.364 [2024-11-19 11:38:35.060383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.364 [2024-11-19 11:38:35.060978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.364 [2024-11-19 11:38:35.061563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.364 [2024-11-19 11:38:35.061586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.364 [2024-11-19 11:38:35.061592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.364 [2024-11-19 11:38:35.061599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.364 [2024-11-19 11:38:35.072750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.364 [2024-11-19 11:38:35.073130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.364 [2024-11-19 11:38:35.073148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.364 [2024-11-19 11:38:35.073155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.364 [2024-11-19 11:38:35.073318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.364 [2024-11-19 11:38:35.073486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.364 [2024-11-19 11:38:35.073495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.364 [2024-11-19 11:38:35.073502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.364 [2024-11-19 11:38:35.073509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.364 [2024-11-19 11:38:35.085592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.364 [2024-11-19 11:38:35.086012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.364 [2024-11-19 11:38:35.086056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.364 [2024-11-19 11:38:35.086080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.364 [2024-11-19 11:38:35.086328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.365 [2024-11-19 11:38:35.086493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.365 [2024-11-19 11:38:35.086502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.365 [2024-11-19 11:38:35.086508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.365 [2024-11-19 11:38:35.086515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.365 [2024-11-19 11:38:35.098497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.365 [2024-11-19 11:38:35.098914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.365 [2024-11-19 11:38:35.098931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.365 [2024-11-19 11:38:35.098939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.365 [2024-11-19 11:38:35.099108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.365 [2024-11-19 11:38:35.099273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.365 [2024-11-19 11:38:35.099281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.365 [2024-11-19 11:38:35.099288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.365 [2024-11-19 11:38:35.099294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.365 [2024-11-19 11:38:35.111365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.365 [2024-11-19 11:38:35.111723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.365 [2024-11-19 11:38:35.111740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.365 [2024-11-19 11:38:35.111747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.365 [2024-11-19 11:38:35.111909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.365 [2024-11-19 11:38:35.112078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.365 [2024-11-19 11:38:35.112088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.365 [2024-11-19 11:38:35.112097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.365 [2024-11-19 11:38:35.112104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.365 [2024-11-19 11:38:35.124168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.365 [2024-11-19 11:38:35.124565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.365 [2024-11-19 11:38:35.124582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.365 [2024-11-19 11:38:35.124590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.365 [2024-11-19 11:38:35.124754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.365 [2024-11-19 11:38:35.124918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.365 [2024-11-19 11:38:35.124927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.365 [2024-11-19 11:38:35.124933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.365 [2024-11-19 11:38:35.124939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.365 [2024-11-19 11:38:35.137157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.365 [2024-11-19 11:38:35.137514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.365 [2024-11-19 11:38:35.137530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.365 [2024-11-19 11:38:35.137539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.365 [2024-11-19 11:38:35.137711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.365 [2024-11-19 11:38:35.137883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.365 [2024-11-19 11:38:35.137893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.365 [2024-11-19 11:38:35.137900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.365 [2024-11-19 11:38:35.137907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.627 [2024-11-19 11:38:35.150054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.627 [2024-11-19 11:38:35.150483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.627 [2024-11-19 11:38:35.150526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.627 [2024-11-19 11:38:35.150550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.627 [2024-11-19 11:38:35.151145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.627 [2024-11-19 11:38:35.151704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.627 [2024-11-19 11:38:35.151714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.627 [2024-11-19 11:38:35.151720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.627 [2024-11-19 11:38:35.151726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.627 [2024-11-19 11:38:35.162882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.627 [2024-11-19 11:38:35.163302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.627 [2024-11-19 11:38:35.163345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.627 [2024-11-19 11:38:35.163369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.627 [2024-11-19 11:38:35.163773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.627 [2024-11-19 11:38:35.163936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.627 [2024-11-19 11:38:35.163945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.627 [2024-11-19 11:38:35.163959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.627 [2024-11-19 11:38:35.163966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.627 [2024-11-19 11:38:35.175723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.627 [2024-11-19 11:38:35.176092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.627 [2024-11-19 11:38:35.176110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.627 [2024-11-19 11:38:35.176118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.627 [2024-11-19 11:38:35.176281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.627 [2024-11-19 11:38:35.176445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.627 [2024-11-19 11:38:35.176454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.627 [2024-11-19 11:38:35.176460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.627 [2024-11-19 11:38:35.176467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.627 [2024-11-19 11:38:35.188559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.627 [2024-11-19 11:38:35.188906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.627 [2024-11-19 11:38:35.188924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.627 [2024-11-19 11:38:35.188932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.627 [2024-11-19 11:38:35.189110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.627 [2024-11-19 11:38:35.189284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.627 [2024-11-19 11:38:35.189294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.627 [2024-11-19 11:38:35.189301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.627 [2024-11-19 11:38:35.189307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.627 [2024-11-19 11:38:35.201649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.627 [2024-11-19 11:38:35.202084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.627 [2024-11-19 11:38:35.202137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.627 [2024-11-19 11:38:35.202169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.627 [2024-11-19 11:38:35.202716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.627 [2024-11-19 11:38:35.202902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.627 [2024-11-19 11:38:35.202912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.627 [2024-11-19 11:38:35.202919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.627 [2024-11-19 11:38:35.202926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.627 [2024-11-19 11:38:35.214689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.627 [2024-11-19 11:38:35.215113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.627 [2024-11-19 11:38:35.215158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.627 [2024-11-19 11:38:35.215184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.627 [2024-11-19 11:38:35.215750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.627 [2024-11-19 11:38:35.215925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.627 [2024-11-19 11:38:35.215935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.627 [2024-11-19 11:38:35.215941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.627 [2024-11-19 11:38:35.215954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.627 [2024-11-19 11:38:35.227547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.627 [2024-11-19 11:38:35.227981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.627 [2024-11-19 11:38:35.228026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.627 [2024-11-19 11:38:35.228049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.627 [2024-11-19 11:38:35.228552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.627 [2024-11-19 11:38:35.228716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.627 [2024-11-19 11:38:35.228725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.627 [2024-11-19 11:38:35.228732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.627 [2024-11-19 11:38:35.228738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.627 [2024-11-19 11:38:35.240456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.627 [2024-11-19 11:38:35.240874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.627 [2024-11-19 11:38:35.240915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.627 [2024-11-19 11:38:35.240940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.627 [2024-11-19 11:38:35.241536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.627 [2024-11-19 11:38:35.241862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.627 [2024-11-19 11:38:35.241872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.627 [2024-11-19 11:38:35.241878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.627 [2024-11-19 11:38:35.241885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.627 [2024-11-19 11:38:35.253296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.627 [2024-11-19 11:38:35.253730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.628 [2024-11-19 11:38:35.253774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.628 [2024-11-19 11:38:35.253797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.628 [2024-11-19 11:38:35.254392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.628 [2024-11-19 11:38:35.254984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.628 [2024-11-19 11:38:35.255019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.628 [2024-11-19 11:38:35.255026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.628 [2024-11-19 11:38:35.255033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.628 [2024-11-19 11:38:35.266186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.628 [2024-11-19 11:38:35.266548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.628 [2024-11-19 11:38:35.266591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.628 [2024-11-19 11:38:35.266614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.628 [2024-11-19 11:38:35.267110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.628 [2024-11-19 11:38:35.267275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.628 [2024-11-19 11:38:35.267284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.628 [2024-11-19 11:38:35.267290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.628 [2024-11-19 11:38:35.267297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.628 [2024-11-19 11:38:35.279016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.628 [2024-11-19 11:38:35.279362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.628 [2024-11-19 11:38:35.279380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.628 [2024-11-19 11:38:35.279387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.628 [2024-11-19 11:38:35.279550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.628 [2024-11-19 11:38:35.279714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.628 [2024-11-19 11:38:35.279723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.628 [2024-11-19 11:38:35.279734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.628 [2024-11-19 11:38:35.279741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.628 [2024-11-19 11:38:35.291832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.628 [2024-11-19 11:38:35.292228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.628 [2024-11-19 11:38:35.292274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.628 [2024-11-19 11:38:35.292298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.628 [2024-11-19 11:38:35.292803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.628 [2024-11-19 11:38:35.292973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.628 [2024-11-19 11:38:35.292983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.628 [2024-11-19 11:38:35.292990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.628 [2024-11-19 11:38:35.292999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.628 [2024-11-19 11:38:35.304769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.628 [2024-11-19 11:38:35.305178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.628 [2024-11-19 11:38:35.305196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.628 [2024-11-19 11:38:35.305204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.628 [2024-11-19 11:38:35.305367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.628 [2024-11-19 11:38:35.305531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.628 [2024-11-19 11:38:35.305540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.628 [2024-11-19 11:38:35.305547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.628 [2024-11-19 11:38:35.305553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.628 [2024-11-19 11:38:35.317630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.628 [2024-11-19 11:38:35.318052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.628 [2024-11-19 11:38:35.318069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.628 [2024-11-19 11:38:35.318076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.628 [2024-11-19 11:38:35.318239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.628 [2024-11-19 11:38:35.318402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.628 [2024-11-19 11:38:35.318412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.628 [2024-11-19 11:38:35.318418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.628 [2024-11-19 11:38:35.318425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.628 [2024-11-19 11:38:35.330499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.628 [2024-11-19 11:38:35.330826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.628 [2024-11-19 11:38:35.330842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.628 [2024-11-19 11:38:35.330849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.628 [2024-11-19 11:38:35.331018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.628 [2024-11-19 11:38:35.331182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.628 [2024-11-19 11:38:35.331191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.628 [2024-11-19 11:38:35.331198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.628 [2024-11-19 11:38:35.331204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.628 [2024-11-19 11:38:35.343460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.628 [2024-11-19 11:38:35.343869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.628 [2024-11-19 11:38:35.343886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.628 [2024-11-19 11:38:35.343893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.628 [2024-11-19 11:38:35.344081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.628 [2024-11-19 11:38:35.344255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.628 [2024-11-19 11:38:35.344265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.628 [2024-11-19 11:38:35.344271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.628 [2024-11-19 11:38:35.344278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.628 [2024-11-19 11:38:35.356376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.628 [2024-11-19 11:38:35.356786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.628 [2024-11-19 11:38:35.356830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.628 [2024-11-19 11:38:35.356854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.628 [2024-11-19 11:38:35.357342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.628 [2024-11-19 11:38:35.357507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.628 [2024-11-19 11:38:35.357516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.628 [2024-11-19 11:38:35.357523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.628 [2024-11-19 11:38:35.357529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.628 [2024-11-19 11:38:35.369246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.628 [2024-11-19 11:38:35.369650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.628 [2024-11-19 11:38:35.369667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.628 [2024-11-19 11:38:35.369678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.629 [2024-11-19 11:38:35.369841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.629 [2024-11-19 11:38:35.370011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.629 [2024-11-19 11:38:35.370022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.629 [2024-11-19 11:38:35.370028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.629 [2024-11-19 11:38:35.370034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.629 [2024-11-19 11:38:35.382101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.629 [2024-11-19 11:38:35.382496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.629 [2024-11-19 11:38:35.382512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.629 [2024-11-19 11:38:35.382520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.629 [2024-11-19 11:38:35.382682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.629 [2024-11-19 11:38:35.382845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.629 [2024-11-19 11:38:35.382855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.629 [2024-11-19 11:38:35.382861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.629 [2024-11-19 11:38:35.382867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.629 [2024-11-19 11:38:35.394953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.629 [2024-11-19 11:38:35.395369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.629 [2024-11-19 11:38:35.395385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.629 [2024-11-19 11:38:35.395393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.629 [2024-11-19 11:38:35.395555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.629 [2024-11-19 11:38:35.395718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.629 [2024-11-19 11:38:35.395727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.629 [2024-11-19 11:38:35.395733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.629 [2024-11-19 11:38:35.395740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.890 [2024-11-19 11:38:35.407841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.890 [2024-11-19 11:38:35.408335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.890 [2024-11-19 11:38:35.408374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.890 [2024-11-19 11:38:35.408399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.890 [2024-11-19 11:38:35.408997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.890 [2024-11-19 11:38:35.409455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.890 [2024-11-19 11:38:35.409465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.890 [2024-11-19 11:38:35.409472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.890 [2024-11-19 11:38:35.409479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.890 [2024-11-19 11:38:35.420745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.890 [2024-11-19 11:38:35.421140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.890 [2024-11-19 11:38:35.421158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.890 [2024-11-19 11:38:35.421166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.890 [2024-11-19 11:38:35.421328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.890 [2024-11-19 11:38:35.421492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.890 [2024-11-19 11:38:35.421500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.890 [2024-11-19 11:38:35.421506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.890 [2024-11-19 11:38:35.421512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.890 [2024-11-19 11:38:35.433530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.890 [2024-11-19 11:38:35.433942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.890 [2024-11-19 11:38:35.433964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.890 [2024-11-19 11:38:35.433972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.890 [2024-11-19 11:38:35.434134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.890 [2024-11-19 11:38:35.434298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.890 [2024-11-19 11:38:35.434307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.890 [2024-11-19 11:38:35.434314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.890 [2024-11-19 11:38:35.434320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.890 [2024-11-19 11:38:35.446366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.890 [2024-11-19 11:38:35.446763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.890 [2024-11-19 11:38:35.446781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.891 [2024-11-19 11:38:35.446789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.891 [2024-11-19 11:38:35.446968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.891 [2024-11-19 11:38:35.447143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.891 [2024-11-19 11:38:35.447153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.891 [2024-11-19 11:38:35.447163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.891 [2024-11-19 11:38:35.447171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.891 [2024-11-19 11:38:35.459491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.891 [2024-11-19 11:38:35.459759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.891 [2024-11-19 11:38:35.459783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.891 [2024-11-19 11:38:35.459791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.891 [2024-11-19 11:38:35.459972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.891 [2024-11-19 11:38:35.460152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.891 [2024-11-19 11:38:35.460161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.891 [2024-11-19 11:38:35.460168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.891 [2024-11-19 11:38:35.460175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.891 [2024-11-19 11:38:35.472510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.891 [2024-11-19 11:38:35.472917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.891 [2024-11-19 11:38:35.472934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.891 [2024-11-19 11:38:35.472942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.891 [2024-11-19 11:38:35.473121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.891 [2024-11-19 11:38:35.473295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.891 [2024-11-19 11:38:35.473305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.891 [2024-11-19 11:38:35.473311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.891 [2024-11-19 11:38:35.473318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.891 [2024-11-19 11:38:35.485302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.891 [2024-11-19 11:38:35.485741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.891 [2024-11-19 11:38:35.485785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.891 [2024-11-19 11:38:35.485809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.891 [2024-11-19 11:38:35.486256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.891 [2024-11-19 11:38:35.486430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.891 [2024-11-19 11:38:35.486440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.891 [2024-11-19 11:38:35.486447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.891 [2024-11-19 11:38:35.486453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.891 [2024-11-19 11:38:35.498114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.891 [2024-11-19 11:38:35.498479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.891 [2024-11-19 11:38:35.498522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.891 [2024-11-19 11:38:35.498545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.891 [2024-11-19 11:38:35.499138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.891 [2024-11-19 11:38:35.499436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.891 [2024-11-19 11:38:35.499445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.891 [2024-11-19 11:38:35.499452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.891 [2024-11-19 11:38:35.499459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.891 [2024-11-19 11:38:35.510910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.891 [2024-11-19 11:38:35.511329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.891 [2024-11-19 11:38:35.511374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.891 [2024-11-19 11:38:35.511397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.891 [2024-11-19 11:38:35.511991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.891 [2024-11-19 11:38:35.512560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.891 [2024-11-19 11:38:35.512569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.891 [2024-11-19 11:38:35.512575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.891 [2024-11-19 11:38:35.512582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.891 [2024-11-19 11:38:35.526123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.891 [2024-11-19 11:38:35.526640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.891 [2024-11-19 11:38:35.526663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:21.891 [2024-11-19 11:38:35.526674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:21.891 [2024-11-19 11:38:35.526929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:21.891 [2024-11-19 11:38:35.527190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.891 [2024-11-19 11:38:35.527204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.891 [2024-11-19 11:38:35.527214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.891 [2024-11-19 11:38:35.527225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.891 [2024-11-19 11:38:35.539153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.891 [2024-11-19 11:38:35.539577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.891 [2024-11-19 11:38:35.539594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:21.891 [2024-11-19 11:38:35.539606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:21.891 [2024-11-19 11:38:35.539778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:21.891 [2024-11-19 11:38:35.539957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.891 [2024-11-19 11:38:35.539967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.891 [2024-11-19 11:38:35.539974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.891 [2024-11-19 11:38:35.539981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.891 [2024-11-19 11:38:35.552150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.891 [2024-11-19 11:38:35.552551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.891 [2024-11-19 11:38:35.552567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:21.891 [2024-11-19 11:38:35.552576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:21.891 [2024-11-19 11:38:35.552738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:21.891 [2024-11-19 11:38:35.552901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.891 [2024-11-19 11:38:35.552911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.891 [2024-11-19 11:38:35.552918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.891 [2024-11-19 11:38:35.552924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.891 [2024-11-19 11:38:35.564959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.891 [2024-11-19 11:38:35.565304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.891 [2024-11-19 11:38:35.565320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:21.891 [2024-11-19 11:38:35.565328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:21.891 [2024-11-19 11:38:35.565491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:21.891 [2024-11-19 11:38:35.565655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.891 [2024-11-19 11:38:35.565664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.891 [2024-11-19 11:38:35.565671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.891 [2024-11-19 11:38:35.565678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.891 [2024-11-19 11:38:35.577789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.892 [2024-11-19 11:38:35.578223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.892 [2024-11-19 11:38:35.578267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:21.892 [2024-11-19 11:38:35.578290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:21.892 [2024-11-19 11:38:35.578768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:21.892 [2024-11-19 11:38:35.578940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.892 [2024-11-19 11:38:35.578955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.892 [2024-11-19 11:38:35.578962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.892 [2024-11-19 11:38:35.578968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.892 [2024-11-19 11:38:35.590599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.892 [2024-11-19 11:38:35.591030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.892 [2024-11-19 11:38:35.591077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:21.892 [2024-11-19 11:38:35.591101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:21.892 [2024-11-19 11:38:35.591681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:21.892 [2024-11-19 11:38:35.592045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.892 [2024-11-19 11:38:35.592055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.892 [2024-11-19 11:38:35.592061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.892 [2024-11-19 11:38:35.592068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.892 [2024-11-19 11:38:35.603392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.892 [2024-11-19 11:38:35.603736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.892 [2024-11-19 11:38:35.603753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:21.892 [2024-11-19 11:38:35.603760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:21.892 [2024-11-19 11:38:35.603923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:21.892 [2024-11-19 11:38:35.604092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.892 [2024-11-19 11:38:35.604103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.892 [2024-11-19 11:38:35.604109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.892 [2024-11-19 11:38:35.604115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.892 [2024-11-19 11:38:35.616191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.892 [2024-11-19 11:38:35.616620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.892 [2024-11-19 11:38:35.616637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:21.892 [2024-11-19 11:38:35.616645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:21.892 [2024-11-19 11:38:35.616808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:21.892 [2024-11-19 11:38:35.616978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.892 [2024-11-19 11:38:35.616989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.892 [2024-11-19 11:38:35.617000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.892 [2024-11-19 11:38:35.617007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.892 [2024-11-19 11:38:35.629088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.892 [2024-11-19 11:38:35.629523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.892 [2024-11-19 11:38:35.629570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:21.892 [2024-11-19 11:38:35.629596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:21.892 [2024-11-19 11:38:35.630089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:21.892 [2024-11-19 11:38:35.630264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.892 [2024-11-19 11:38:35.630274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.892 [2024-11-19 11:38:35.630280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.892 [2024-11-19 11:38:35.630287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.892 [2024-11-19 11:38:35.642024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.892 [2024-11-19 11:38:35.642424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.892 [2024-11-19 11:38:35.642441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:21.892 [2024-11-19 11:38:35.642449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:21.892 [2024-11-19 11:38:35.642633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:21.892 [2024-11-19 11:38:35.642807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.892 [2024-11-19 11:38:35.642817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.892 [2024-11-19 11:38:35.642824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.892 [2024-11-19 11:38:35.642830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.892 [2024-11-19 11:38:35.654939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.892 [2024-11-19 11:38:35.655389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.892 [2024-11-19 11:38:35.655434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:21.892 [2024-11-19 11:38:35.655458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:21.892 [2024-11-19 11:38:35.655959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:21.892 [2024-11-19 11:38:35.656124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.892 [2024-11-19 11:38:35.656134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.892 [2024-11-19 11:38:35.656140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.892 [2024-11-19 11:38:35.656147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.153 [2024-11-19 11:38:35.667903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.153 [2024-11-19 11:38:35.668343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.153 [2024-11-19 11:38:35.668360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.153 [2024-11-19 11:38:35.668368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.153 [2024-11-19 11:38:35.668541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.153 [2024-11-19 11:38:35.668714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.153 [2024-11-19 11:38:35.668724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.153 [2024-11-19 11:38:35.668731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.153 [2024-11-19 11:38:35.668737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.153 [2024-11-19 11:38:35.680748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.153 [2024-11-19 11:38:35.681166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.153 [2024-11-19 11:38:35.681183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.153 [2024-11-19 11:38:35.681191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.153 [2024-11-19 11:38:35.681354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.153 [2024-11-19 11:38:35.681518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.153 [2024-11-19 11:38:35.681527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.153 [2024-11-19 11:38:35.681533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.153 [2024-11-19 11:38:35.681540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.153 [2024-11-19 11:38:35.693620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.153 [2024-11-19 11:38:35.694034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.153 [2024-11-19 11:38:35.694051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.153 [2024-11-19 11:38:35.694059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.153 [2024-11-19 11:38:35.694222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.153 [2024-11-19 11:38:35.694384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.153 [2024-11-19 11:38:35.694394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.153 [2024-11-19 11:38:35.694400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.153 [2024-11-19 11:38:35.694406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.153 [2024-11-19 11:38:35.706487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.153 [2024-11-19 11:38:35.706899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.153 [2024-11-19 11:38:35.706916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.153 [2024-11-19 11:38:35.706927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.153 [2024-11-19 11:38:35.707106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.153 [2024-11-19 11:38:35.707280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.153 [2024-11-19 11:38:35.707290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.153 [2024-11-19 11:38:35.707296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.153 [2024-11-19 11:38:35.707303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.153 [2024-11-19 11:38:35.719662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.153 [2024-11-19 11:38:35.720028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.153 [2024-11-19 11:38:35.720047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.153 [2024-11-19 11:38:35.720055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.153 [2024-11-19 11:38:35.720233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.153 [2024-11-19 11:38:35.720413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.153 [2024-11-19 11:38:35.720423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.153 [2024-11-19 11:38:35.720429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.153 [2024-11-19 11:38:35.720436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.153 [2024-11-19 11:38:35.732578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.154 [2024-11-19 11:38:35.732992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.154 [2024-11-19 11:38:35.733009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.154 [2024-11-19 11:38:35.733017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.154 [2024-11-19 11:38:35.733180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.154 [2024-11-19 11:38:35.733344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.154 [2024-11-19 11:38:35.733353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.154 [2024-11-19 11:38:35.733359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.154 [2024-11-19 11:38:35.733366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.154 [2024-11-19 11:38:35.745691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.154 [2024-11-19 11:38:35.746085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.154 [2024-11-19 11:38:35.746103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.154 [2024-11-19 11:38:35.746111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.154 [2024-11-19 11:38:35.746276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.154 [2024-11-19 11:38:35.746443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.154 [2024-11-19 11:38:35.746453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.154 [2024-11-19 11:38:35.746459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.154 [2024-11-19 11:38:35.746466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.154 [2024-11-19 11:38:35.758692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.154 [2024-11-19 11:38:35.759119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.154 [2024-11-19 11:38:35.759138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.154 [2024-11-19 11:38:35.759146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.154 [2024-11-19 11:38:35.759330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.154 [2024-11-19 11:38:35.759494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.154 [2024-11-19 11:38:35.759504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.154 [2024-11-19 11:38:35.759510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.154 [2024-11-19 11:38:35.759516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.154 [2024-11-19 11:38:35.771605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.154 [2024-11-19 11:38:35.771902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.154 [2024-11-19 11:38:35.771960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.154 [2024-11-19 11:38:35.771985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.154 [2024-11-19 11:38:35.772565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.154 [2024-11-19 11:38:35.773164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.154 [2024-11-19 11:38:35.773174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.154 [2024-11-19 11:38:35.773180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.154 [2024-11-19 11:38:35.773187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.154 [2024-11-19 11:38:35.784497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.154 [2024-11-19 11:38:35.784807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.154 [2024-11-19 11:38:35.784823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.154 [2024-11-19 11:38:35.784831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.154 [2024-11-19 11:38:35.784999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.154 [2024-11-19 11:38:35.785163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.154 [2024-11-19 11:38:35.785173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.154 [2024-11-19 11:38:35.785183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.154 [2024-11-19 11:38:35.785190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.154 [2024-11-19 11:38:35.797430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.154 [2024-11-19 11:38:35.797779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.154 [2024-11-19 11:38:35.797824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.154 [2024-11-19 11:38:35.797849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.154 [2024-11-19 11:38:35.798331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.154 [2024-11-19 11:38:35.798497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.154 [2024-11-19 11:38:35.798506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.154 [2024-11-19 11:38:35.798512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.154 [2024-11-19 11:38:35.798519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.154 [2024-11-19 11:38:35.810625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.154 [2024-11-19 11:38:35.810980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.154 [2024-11-19 11:38:35.810999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.154 [2024-11-19 11:38:35.811007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.154 [2024-11-19 11:38:35.811179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.154 [2024-11-19 11:38:35.811353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.154 [2024-11-19 11:38:35.811364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.154 [2024-11-19 11:38:35.811370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.154 [2024-11-19 11:38:35.811377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.154 [2024-11-19 11:38:35.823512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.154 [2024-11-19 11:38:35.823902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.154 [2024-11-19 11:38:35.823919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.154 [2024-11-19 11:38:35.823927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.154 [2024-11-19 11:38:35.824095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.154 [2024-11-19 11:38:35.824261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.154 [2024-11-19 11:38:35.824270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.154 [2024-11-19 11:38:35.824276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.154 [2024-11-19 11:38:35.824283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.154 [2024-11-19 11:38:35.836390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.154 [2024-11-19 11:38:35.836695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.154 [2024-11-19 11:38:35.836712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.154 [2024-11-19 11:38:35.836720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.154 [2024-11-19 11:38:35.836892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.154 [2024-11-19 11:38:35.837070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.154 [2024-11-19 11:38:35.837081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.154 [2024-11-19 11:38:35.837088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.154 [2024-11-19 11:38:35.837094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.154 [2024-11-19 11:38:35.849316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.154 [2024-11-19 11:38:35.849638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.154 [2024-11-19 11:38:35.849655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.154 [2024-11-19 11:38:35.849662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.154 [2024-11-19 11:38:35.849825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.154 [2024-11-19 11:38:35.849995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.154 [2024-11-19 11:38:35.850005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.155 [2024-11-19 11:38:35.850011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.155 [2024-11-19 11:38:35.850018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.155 [2024-11-19 11:38:35.862162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.155 [2024-11-19 11:38:35.862529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.155 [2024-11-19 11:38:35.862547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.155 [2024-11-19 11:38:35.862554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.155 [2024-11-19 11:38:35.862726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.155 [2024-11-19 11:38:35.862899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.155 [2024-11-19 11:38:35.862909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.155 [2024-11-19 11:38:35.862915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.155 [2024-11-19 11:38:35.862922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.155 [2024-11-19 11:38:35.875041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.155 [2024-11-19 11:38:35.875365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.155 [2024-11-19 11:38:35.875381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.155 [2024-11-19 11:38:35.875392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.155 [2024-11-19 11:38:35.875556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.155 [2024-11-19 11:38:35.875720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.155 [2024-11-19 11:38:35.875729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.155 [2024-11-19 11:38:35.875735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.155 [2024-11-19 11:38:35.875742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.155 [2024-11-19 11:38:35.887921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.155 [2024-11-19 11:38:35.888265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.155 [2024-11-19 11:38:35.888281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.155 [2024-11-19 11:38:35.888290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.155 [2024-11-19 11:38:35.888453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.155 [2024-11-19 11:38:35.888616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.155 [2024-11-19 11:38:35.888626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.155 [2024-11-19 11:38:35.888632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.155 [2024-11-19 11:38:35.888639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.155 6868.75 IOPS, 26.83 MiB/s [2024-11-19T10:38:35.936Z] [2024-11-19 11:38:35.901934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.155 [2024-11-19 11:38:35.902310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.155 [2024-11-19 11:38:35.902327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.155 [2024-11-19 11:38:35.902335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.155 [2024-11-19 11:38:35.902498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.155 [2024-11-19 11:38:35.902662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.155 [2024-11-19 11:38:35.902671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.155 [2024-11-19 11:38:35.902678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.155 [2024-11-19 11:38:35.902684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.155 [2024-11-19 11:38:35.914768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.155 [2024-11-19 11:38:35.915120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.155 [2024-11-19 11:38:35.915137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.155 [2024-11-19 11:38:35.915145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.155 [2024-11-19 11:38:35.915308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.155 [2024-11-19 11:38:35.915476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.155 [2024-11-19 11:38:35.915486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.155 [2024-11-19 11:38:35.915492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.155 [2024-11-19 11:38:35.915499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.155 [2024-11-19 11:38:35.927767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.155 [2024-11-19 11:38:35.928177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.155 [2024-11-19 11:38:35.928222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.155 [2024-11-19 11:38:35.928245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.155 [2024-11-19 11:38:35.928658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.155 [2024-11-19 11:38:35.928831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.155 [2024-11-19 11:38:35.928841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.155 [2024-11-19 11:38:35.928848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.155 [2024-11-19 11:38:35.928854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.416 [2024-11-19 11:38:35.940602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.416 [2024-11-19 11:38:35.940924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.416 [2024-11-19 11:38:35.940940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.416 [2024-11-19 11:38:35.940954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.416 [2024-11-19 11:38:35.941117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.416 [2024-11-19 11:38:35.941280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.416 [2024-11-19 11:38:35.941289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.416 [2024-11-19 11:38:35.941295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.416 [2024-11-19 11:38:35.941302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.416 [2024-11-19 11:38:35.953433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.416 [2024-11-19 11:38:35.953881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.416 [2024-11-19 11:38:35.953925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.416 [2024-11-19 11:38:35.953965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.416 [2024-11-19 11:38:35.954488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.416 [2024-11-19 11:38:35.954653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.416 [2024-11-19 11:38:35.954663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.416 [2024-11-19 11:38:35.954674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.416 [2024-11-19 11:38:35.954682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.416 [2024-11-19 11:38:35.966270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.416 [2024-11-19 11:38:35.966698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.416 [2024-11-19 11:38:35.966716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.416 [2024-11-19 11:38:35.966725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.416 [2024-11-19 11:38:35.966903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.416 [2024-11-19 11:38:35.967089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.416 [2024-11-19 11:38:35.967099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.416 [2024-11-19 11:38:35.967106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.416 [2024-11-19 11:38:35.967113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.416 [2024-11-19 11:38:35.979347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.416 [2024-11-19 11:38:35.979780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.416 [2024-11-19 11:38:35.979820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.416 [2024-11-19 11:38:35.979846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.416 [2024-11-19 11:38:35.980441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.417 [2024-11-19 11:38:35.980658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.417 [2024-11-19 11:38:35.980668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.417 [2024-11-19 11:38:35.980675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.417 [2024-11-19 11:38:35.980681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.417 [2024-11-19 11:38:35.992205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.417 [2024-11-19 11:38:35.992639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.417 [2024-11-19 11:38:35.992690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.417 [2024-11-19 11:38:35.992714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.417 [2024-11-19 11:38:35.993309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.417 [2024-11-19 11:38:35.993865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.417 [2024-11-19 11:38:35.993874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.417 [2024-11-19 11:38:35.993880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.417 [2024-11-19 11:38:35.993887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.417 [2024-11-19 11:38:36.005085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.417 [2024-11-19 11:38:36.005374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.417 [2024-11-19 11:38:36.005391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.417 [2024-11-19 11:38:36.005398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.417 [2024-11-19 11:38:36.005561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.417 [2024-11-19 11:38:36.005724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.417 [2024-11-19 11:38:36.005733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.417 [2024-11-19 11:38:36.005740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.417 [2024-11-19 11:38:36.005746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.417 [2024-11-19 11:38:36.018000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.417 [2024-11-19 11:38:36.018335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.417 [2024-11-19 11:38:36.018378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.417 [2024-11-19 11:38:36.018402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.417 [2024-11-19 11:38:36.018863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.417 [2024-11-19 11:38:36.019034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.417 [2024-11-19 11:38:36.019045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.417 [2024-11-19 11:38:36.019051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.417 [2024-11-19 11:38:36.019058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.417 [2024-11-19 11:38:36.030850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.417 [2024-11-19 11:38:36.031200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.417 [2024-11-19 11:38:36.031217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.417 [2024-11-19 11:38:36.031225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.417 [2024-11-19 11:38:36.031388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.417 [2024-11-19 11:38:36.031551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.417 [2024-11-19 11:38:36.031561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.417 [2024-11-19 11:38:36.031567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.417 [2024-11-19 11:38:36.031573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.417 [2024-11-19 11:38:36.043841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.417 [2024-11-19 11:38:36.044269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.417 [2024-11-19 11:38:36.044287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.417 [2024-11-19 11:38:36.044298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.417 [2024-11-19 11:38:36.044498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.417 [2024-11-19 11:38:36.044672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.417 [2024-11-19 11:38:36.044682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.417 [2024-11-19 11:38:36.044689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.417 [2024-11-19 11:38:36.044696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.417 [2024-11-19 11:38:36.056808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.417 [2024-11-19 11:38:36.057177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.417 [2024-11-19 11:38:36.057195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.417 [2024-11-19 11:38:36.057203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.417 [2024-11-19 11:38:36.057375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.417 [2024-11-19 11:38:36.057548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.417 [2024-11-19 11:38:36.057559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.417 [2024-11-19 11:38:36.057565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.417 [2024-11-19 11:38:36.057572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.417 [2024-11-19 11:38:36.069740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.417 [2024-11-19 11:38:36.070134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.417 [2024-11-19 11:38:36.070152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.417 [2024-11-19 11:38:36.070159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.417 [2024-11-19 11:38:36.070322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.417 [2024-11-19 11:38:36.070486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.417 [2024-11-19 11:38:36.070496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.417 [2024-11-19 11:38:36.070502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.417 [2024-11-19 11:38:36.070508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.417 [2024-11-19 11:38:36.082618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.417 [2024-11-19 11:38:36.082937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.417 [2024-11-19 11:38:36.082960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.417 [2024-11-19 11:38:36.082969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.417 [2024-11-19 11:38:36.083132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.417 [2024-11-19 11:38:36.083299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.417 [2024-11-19 11:38:36.083309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.417 [2024-11-19 11:38:36.083316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.417 [2024-11-19 11:38:36.083322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.417 [2024-11-19 11:38:36.095434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.417 [2024-11-19 11:38:36.095835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.417 [2024-11-19 11:38:36.095879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.417 [2024-11-19 11:38:36.095903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.417 [2024-11-19 11:38:36.096496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.417 [2024-11-19 11:38:36.097019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.417 [2024-11-19 11:38:36.097029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.417 [2024-11-19 11:38:36.097035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.417 [2024-11-19 11:38:36.097043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.417 [2024-11-19 11:38:36.108377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.418 [2024-11-19 11:38:36.108817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.418 [2024-11-19 11:38:36.108862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.418 [2024-11-19 11:38:36.108886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.418 [2024-11-19 11:38:36.109480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.418 [2024-11-19 11:38:36.109904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.418 [2024-11-19 11:38:36.109922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.418 [2024-11-19 11:38:36.109937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.418 [2024-11-19 11:38:36.109960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.418 [2024-11-19 11:38:36.123292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.418 [2024-11-19 11:38:36.123787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.418 [2024-11-19 11:38:36.123810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.418 [2024-11-19 11:38:36.123821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.418 [2024-11-19 11:38:36.124084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.418 [2024-11-19 11:38:36.124342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.418 [2024-11-19 11:38:36.124355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.418 [2024-11-19 11:38:36.124370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.418 [2024-11-19 11:38:36.124380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.418 [2024-11-19 11:38:36.136261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.418 [2024-11-19 11:38:36.136547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.418 [2024-11-19 11:38:36.136565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.418 [2024-11-19 11:38:36.136573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.418 [2024-11-19 11:38:36.136746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.418 [2024-11-19 11:38:36.136918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.418 [2024-11-19 11:38:36.136928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.418 [2024-11-19 11:38:36.136935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.418 [2024-11-19 11:38:36.136942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.418 [2024-11-19 11:38:36.149115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.418 [2024-11-19 11:38:36.149459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.418 [2024-11-19 11:38:36.149475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:22.418 [2024-11-19 11:38:36.149483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:22.418 [2024-11-19 11:38:36.149646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:22.418 [2024-11-19 11:38:36.149809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.418 [2024-11-19 11:38:36.149819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.418 [2024-11-19 11:38:36.149825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.418 [2024-11-19 11:38:36.149832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.418 [2024-11-19 11:38:36.162041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.418 [2024-11-19 11:38:36.162323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.418 [2024-11-19 11:38:36.162340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.418 [2024-11-19 11:38:36.162348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.418 [2024-11-19 11:38:36.162511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.418 [2024-11-19 11:38:36.162675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.418 [2024-11-19 11:38:36.162685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.418 [2024-11-19 11:38:36.162692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.418 [2024-11-19 11:38:36.162698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.418 [2024-11-19 11:38:36.174952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.418 [2024-11-19 11:38:36.175306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.418 [2024-11-19 11:38:36.175322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.418 [2024-11-19 11:38:36.175330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.418 [2024-11-19 11:38:36.175493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.418 [2024-11-19 11:38:36.175656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.418 [2024-11-19 11:38:36.175666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.418 [2024-11-19 11:38:36.175672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.418 [2024-11-19 11:38:36.175679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.418 [2024-11-19 11:38:36.187761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.418 [2024-11-19 11:38:36.188153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.418 [2024-11-19 11:38:36.188170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.418 [2024-11-19 11:38:36.188178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.418 [2024-11-19 11:38:36.188349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.418 [2024-11-19 11:38:36.188522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.418 [2024-11-19 11:38:36.188531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.418 [2024-11-19 11:38:36.188538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.418 [2024-11-19 11:38:36.188545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.702 [2024-11-19 11:38:36.200763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.702 [2024-11-19 11:38:36.201192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.702 [2024-11-19 11:38:36.201237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.702 [2024-11-19 11:38:36.201260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.702 [2024-11-19 11:38:36.201841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.702 [2024-11-19 11:38:36.202262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.702 [2024-11-19 11:38:36.202272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.702 [2024-11-19 11:38:36.202278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.702 [2024-11-19 11:38:36.202286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.702 [2024-11-19 11:38:36.213629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.702 [2024-11-19 11:38:36.214054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.702 [2024-11-19 11:38:36.214100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.702 [2024-11-19 11:38:36.214140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.702 [2024-11-19 11:38:36.214721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.702 [2024-11-19 11:38:36.215313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.702 [2024-11-19 11:38:36.215323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.702 [2024-11-19 11:38:36.215330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.702 [2024-11-19 11:38:36.215336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.702 [2024-11-19 11:38:36.226709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.702 [2024-11-19 11:38:36.227007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.702 [2024-11-19 11:38:36.227026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.702 [2024-11-19 11:38:36.227034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.702 [2024-11-19 11:38:36.227211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.702 [2024-11-19 11:38:36.227389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.702 [2024-11-19 11:38:36.227399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.702 [2024-11-19 11:38:36.227406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.702 [2024-11-19 11:38:36.227414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.702 [2024-11-19 11:38:36.239728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.702 [2024-11-19 11:38:36.240185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.702 [2024-11-19 11:38:36.240231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.702 [2024-11-19 11:38:36.240255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.702 [2024-11-19 11:38:36.240840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.702 [2024-11-19 11:38:36.241238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.702 [2024-11-19 11:38:36.241257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.702 [2024-11-19 11:38:36.241272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.702 [2024-11-19 11:38:36.241286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.702 [2024-11-19 11:38:36.254687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.703 [2024-11-19 11:38:36.255188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.703 [2024-11-19 11:38:36.255232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.703 [2024-11-19 11:38:36.255255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.703 [2024-11-19 11:38:36.255834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.703 [2024-11-19 11:38:36.256404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.703 [2024-11-19 11:38:36.256418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.703 [2024-11-19 11:38:36.256428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.703 [2024-11-19 11:38:36.256438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.703 [2024-11-19 11:38:36.267663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.703 [2024-11-19 11:38:36.268059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.703 [2024-11-19 11:38:36.268077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.703 [2024-11-19 11:38:36.268085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.703 [2024-11-19 11:38:36.268253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.703 [2024-11-19 11:38:36.268422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.703 [2024-11-19 11:38:36.268431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.703 [2024-11-19 11:38:36.268438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.703 [2024-11-19 11:38:36.268445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.703 [2024-11-19 11:38:36.280537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.703 [2024-11-19 11:38:36.280968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.703 [2024-11-19 11:38:36.281014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.703 [2024-11-19 11:38:36.281037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.703 [2024-11-19 11:38:36.281450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.703 [2024-11-19 11:38:36.281615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.703 [2024-11-19 11:38:36.281624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.703 [2024-11-19 11:38:36.281631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.703 [2024-11-19 11:38:36.281637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.703 [2024-11-19 11:38:36.293493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.703 [2024-11-19 11:38:36.293915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.703 [2024-11-19 11:38:36.293971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.703 [2024-11-19 11:38:36.293996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.703 [2024-11-19 11:38:36.294449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.703 [2024-11-19 11:38:36.294614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.703 [2024-11-19 11:38:36.294624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.703 [2024-11-19 11:38:36.294635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.703 [2024-11-19 11:38:36.294644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.703 [2024-11-19 11:38:36.306283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.703 [2024-11-19 11:38:36.306655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.703 [2024-11-19 11:38:36.306672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.703 [2024-11-19 11:38:36.306680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.703 [2024-11-19 11:38:36.306843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.703 [2024-11-19 11:38:36.307014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.703 [2024-11-19 11:38:36.307024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.703 [2024-11-19 11:38:36.307030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.703 [2024-11-19 11:38:36.307037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.703 [2024-11-19 11:38:36.319138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.703 [2024-11-19 11:38:36.319513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.703 [2024-11-19 11:38:36.319530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.703 [2024-11-19 11:38:36.319537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.703 [2024-11-19 11:38:36.319700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.703 [2024-11-19 11:38:36.319864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.703 [2024-11-19 11:38:36.319873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.703 [2024-11-19 11:38:36.319880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.703 [2024-11-19 11:38:36.319887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.703 [2024-11-19 11:38:36.332067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.703 [2024-11-19 11:38:36.332437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.703 [2024-11-19 11:38:36.332453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.703 [2024-11-19 11:38:36.332461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.703 [2024-11-19 11:38:36.332625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.703 [2024-11-19 11:38:36.332789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.703 [2024-11-19 11:38:36.332798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.703 [2024-11-19 11:38:36.332805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.703 [2024-11-19 11:38:36.332811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.703 [2024-11-19 11:38:36.345023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.703 [2024-11-19 11:38:36.345437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.704 [2024-11-19 11:38:36.345453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.704 [2024-11-19 11:38:36.345460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.704 [2024-11-19 11:38:36.345622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.704 [2024-11-19 11:38:36.345786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.704 [2024-11-19 11:38:36.345795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.704 [2024-11-19 11:38:36.345801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.704 [2024-11-19 11:38:36.345809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.704 [2024-11-19 11:38:36.358006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.704 [2024-11-19 11:38:36.358420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.704 [2024-11-19 11:38:36.358437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.704 [2024-11-19 11:38:36.358444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.704 [2024-11-19 11:38:36.358607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.704 [2024-11-19 11:38:36.358771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.704 [2024-11-19 11:38:36.358781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.704 [2024-11-19 11:38:36.358787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.704 [2024-11-19 11:38:36.358793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.704 [2024-11-19 11:38:36.370817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.704 [2024-11-19 11:38:36.371239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.704 [2024-11-19 11:38:36.371284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.704 [2024-11-19 11:38:36.371307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.704 [2024-11-19 11:38:36.371722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.704 [2024-11-19 11:38:36.371886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.704 [2024-11-19 11:38:36.371896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.704 [2024-11-19 11:38:36.371902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.704 [2024-11-19 11:38:36.371909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.704 [2024-11-19 11:38:36.385838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.704 [2024-11-19 11:38:36.386360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.704 [2024-11-19 11:38:36.386382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.704 [2024-11-19 11:38:36.386397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.704 [2024-11-19 11:38:36.386649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.704 [2024-11-19 11:38:36.386904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.704 [2024-11-19 11:38:36.386917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.704 [2024-11-19 11:38:36.386927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.704 [2024-11-19 11:38:36.386937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.704 [2024-11-19 11:38:36.398832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.704 [2024-11-19 11:38:36.399259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.704 [2024-11-19 11:38:36.399277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.704 [2024-11-19 11:38:36.399284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.704 [2024-11-19 11:38:36.399451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.704 [2024-11-19 11:38:36.399620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.704 [2024-11-19 11:38:36.399630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.704 [2024-11-19 11:38:36.399636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.704 [2024-11-19 11:38:36.399643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.704 [2024-11-19 11:38:36.411644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.704 [2024-11-19 11:38:36.412035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.704 [2024-11-19 11:38:36.412052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.704 [2024-11-19 11:38:36.412060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.704 [2024-11-19 11:38:36.412222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.704 [2024-11-19 11:38:36.412386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.704 [2024-11-19 11:38:36.412396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.704 [2024-11-19 11:38:36.412403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.704 [2024-11-19 11:38:36.412410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.704 [2024-11-19 11:38:36.424488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.704 [2024-11-19 11:38:36.424830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.704 [2024-11-19 11:38:36.424845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.704 [2024-11-19 11:38:36.424853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.704 [2024-11-19 11:38:36.425023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.704 [2024-11-19 11:38:36.425190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.704 [2024-11-19 11:38:36.425198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.704 [2024-11-19 11:38:36.425205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.704 [2024-11-19 11:38:36.425210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.704 [2024-11-19 11:38:36.437316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.704 [2024-11-19 11:38:36.437721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.704 [2024-11-19 11:38:36.437738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.705 [2024-11-19 11:38:36.437746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.705 [2024-11-19 11:38:36.437918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.705 [2024-11-19 11:38:36.438099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.705 [2024-11-19 11:38:36.438110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.705 [2024-11-19 11:38:36.438117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.705 [2024-11-19 11:38:36.438124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.705 [2024-11-19 11:38:36.450246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.705 [2024-11-19 11:38:36.450662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.705 [2024-11-19 11:38:36.450679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:22.705 [2024-11-19 11:38:36.450687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:22.705 [2024-11-19 11:38:36.450877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:22.705 [2024-11-19 11:38:36.451064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.705 [2024-11-19 11:38:36.451074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.705 [2024-11-19 11:38:36.451081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.705 [2024-11-19 11:38:36.451088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.012 [2024-11-19 11:38:36.463451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.012 [2024-11-19 11:38:36.463869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.012 [2024-11-19 11:38:36.463886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.012 [2024-11-19 11:38:36.463895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.012 [2024-11-19 11:38:36.464081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.012 [2024-11-19 11:38:36.464261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.012 [2024-11-19 11:38:36.464272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.012 [2024-11-19 11:38:36.464282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.012 [2024-11-19 11:38:36.464290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.012 [2024-11-19 11:38:36.476656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.012 [2024-11-19 11:38:36.477017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.012 [2024-11-19 11:38:36.477036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.012 [2024-11-19 11:38:36.477045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.012 [2024-11-19 11:38:36.477222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.012 [2024-11-19 11:38:36.477400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.012 [2024-11-19 11:38:36.477410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.012 [2024-11-19 11:38:36.477418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.012 [2024-11-19 11:38:36.477425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.012 [2024-11-19 11:38:36.489826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.012 [2024-11-19 11:38:36.490252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.012 [2024-11-19 11:38:36.490271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.012 [2024-11-19 11:38:36.490279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.012 [2024-11-19 11:38:36.490457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.012 [2024-11-19 11:38:36.490635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.012 [2024-11-19 11:38:36.490645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.012 [2024-11-19 11:38:36.490652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.012 [2024-11-19 11:38:36.490659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.012 [2024-11-19 11:38:36.503039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.012 [2024-11-19 11:38:36.503444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.012 [2024-11-19 11:38:36.503462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.012 [2024-11-19 11:38:36.503470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.012 [2024-11-19 11:38:36.503647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.012 [2024-11-19 11:38:36.503827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.012 [2024-11-19 11:38:36.503837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.012 [2024-11-19 11:38:36.503844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.012 [2024-11-19 11:38:36.503851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.012 [2024-11-19 11:38:36.515870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.012 [2024-11-19 11:38:36.516290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.012 [2024-11-19 11:38:36.516306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.012 [2024-11-19 11:38:36.516315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.012 [2024-11-19 11:38:36.516478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.012 [2024-11-19 11:38:36.516641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.012 [2024-11-19 11:38:36.516651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.012 [2024-11-19 11:38:36.516657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.012 [2024-11-19 11:38:36.516664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.012 [2024-11-19 11:38:36.528740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.012 [2024-11-19 11:38:36.529133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.012 [2024-11-19 11:38:36.529150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.012 [2024-11-19 11:38:36.529159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.012 [2024-11-19 11:38:36.529323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.012 [2024-11-19 11:38:36.529487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.012 [2024-11-19 11:38:36.529496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.012 [2024-11-19 11:38:36.529503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.012 [2024-11-19 11:38:36.529509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.012 [2024-11-19 11:38:36.541579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.012 [2024-11-19 11:38:36.541997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.012 [2024-11-19 11:38:36.542040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.012 [2024-11-19 11:38:36.542065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.012 [2024-11-19 11:38:36.542646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.012 [2024-11-19 11:38:36.543244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.012 [2024-11-19 11:38:36.543282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.012 [2024-11-19 11:38:36.543289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.012 [2024-11-19 11:38:36.543295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.012 [2024-11-19 11:38:36.554479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.012 [2024-11-19 11:38:36.554889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.012 [2024-11-19 11:38:36.554933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.012 [2024-11-19 11:38:36.554979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.012 [2024-11-19 11:38:36.555560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.012 [2024-11-19 11:38:36.556122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.012 [2024-11-19 11:38:36.556132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.012 [2024-11-19 11:38:36.556139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.012 [2024-11-19 11:38:36.556146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.012 [2024-11-19 11:38:36.567312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.012 [2024-11-19 11:38:36.567736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.012 [2024-11-19 11:38:36.567779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.012 [2024-11-19 11:38:36.567803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.012 [2024-11-19 11:38:36.568174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.012 [2024-11-19 11:38:36.568339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.012 [2024-11-19 11:38:36.568348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.013 [2024-11-19 11:38:36.568355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.013 [2024-11-19 11:38:36.568362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.013 [2024-11-19 11:38:36.580129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.013 [2024-11-19 11:38:36.580548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.013 [2024-11-19 11:38:36.580564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.013 [2024-11-19 11:38:36.580572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.013 [2024-11-19 11:38:36.580735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.013 [2024-11-19 11:38:36.580898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.013 [2024-11-19 11:38:36.580908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.013 [2024-11-19 11:38:36.580914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.013 [2024-11-19 11:38:36.580920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.013 [2024-11-19 11:38:36.593123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.013 [2024-11-19 11:38:36.593527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.013 [2024-11-19 11:38:36.593571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.013 [2024-11-19 11:38:36.593594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.013 [2024-11-19 11:38:36.594042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.013 [2024-11-19 11:38:36.594209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.013 [2024-11-19 11:38:36.594217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.013 [2024-11-19 11:38:36.594224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.013 [2024-11-19 11:38:36.594230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.013 [2024-11-19 11:38:36.606033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.013 [2024-11-19 11:38:36.606444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.013 [2024-11-19 11:38:36.606490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.013 [2024-11-19 11:38:36.606514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.013 [2024-11-19 11:38:36.607042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.013 [2024-11-19 11:38:36.607208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.013 [2024-11-19 11:38:36.607217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.013 [2024-11-19 11:38:36.607223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.013 [2024-11-19 11:38:36.607230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.013 [2024-11-19 11:38:36.618849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.013 [2024-11-19 11:38:36.619141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.013 [2024-11-19 11:38:36.619158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.013 [2024-11-19 11:38:36.619167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.013 [2024-11-19 11:38:36.619330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.013 [2024-11-19 11:38:36.619494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.013 [2024-11-19 11:38:36.619504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.013 [2024-11-19 11:38:36.619510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.013 [2024-11-19 11:38:36.619516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.013 [2024-11-19 11:38:36.631742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.013 [2024-11-19 11:38:36.632123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.013 [2024-11-19 11:38:36.632171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.013 [2024-11-19 11:38:36.632196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.013 [2024-11-19 11:38:36.632776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.013 [2024-11-19 11:38:36.633372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.013 [2024-11-19 11:38:36.633400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.013 [2024-11-19 11:38:36.633431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.013 [2024-11-19 11:38:36.633452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.013 [2024-11-19 11:38:36.644710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.013 [2024-11-19 11:38:36.645154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.013 [2024-11-19 11:38:36.645171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.013 [2024-11-19 11:38:36.645180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.013 [2024-11-19 11:38:36.645357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.013 [2024-11-19 11:38:36.645520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.013 [2024-11-19 11:38:36.645529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.013 [2024-11-19 11:38:36.645535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.013 [2024-11-19 11:38:36.645542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.013 [2024-11-19 11:38:36.657543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.013 [2024-11-19 11:38:36.657889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.013 [2024-11-19 11:38:36.657937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.013 [2024-11-19 11:38:36.657976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.013 [2024-11-19 11:38:36.658493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.013 [2024-11-19 11:38:36.658657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.013 [2024-11-19 11:38:36.658667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.013 [2024-11-19 11:38:36.658673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.013 [2024-11-19 11:38:36.658680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.013 [2024-11-19 11:38:36.670410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.013 [2024-11-19 11:38:36.670861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.013 [2024-11-19 11:38:36.670906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.013 [2024-11-19 11:38:36.670929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.013 [2024-11-19 11:38:36.671464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.013 [2024-11-19 11:38:36.671630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.013 [2024-11-19 11:38:36.671640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.013 [2024-11-19 11:38:36.671647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.013 [2024-11-19 11:38:36.671654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.013 [2024-11-19 11:38:36.683269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.013 [2024-11-19 11:38:36.683684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.013 [2024-11-19 11:38:36.683699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.013 [2024-11-19 11:38:36.683707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.013 [2024-11-19 11:38:36.683870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.013 [2024-11-19 11:38:36.684041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.013 [2024-11-19 11:38:36.684052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.013 [2024-11-19 11:38:36.684058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.013 [2024-11-19 11:38:36.684066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.013 [2024-11-19 11:38:36.696136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.013 [2024-11-19 11:38:36.696561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.013 [2024-11-19 11:38:36.696603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.013 [2024-11-19 11:38:36.696627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.013 [2024-11-19 11:38:36.697219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.013 [2024-11-19 11:38:36.697412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.014 [2024-11-19 11:38:36.697422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.014 [2024-11-19 11:38:36.697428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.014 [2024-11-19 11:38:36.697436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.014 [2024-11-19 11:38:36.709036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.014 [2024-11-19 11:38:36.709459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.014 [2024-11-19 11:38:36.709512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.014 [2024-11-19 11:38:36.709536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.014 [2024-11-19 11:38:36.710132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.014 [2024-11-19 11:38:36.710693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.014 [2024-11-19 11:38:36.710703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.014 [2024-11-19 11:38:36.710709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.014 [2024-11-19 11:38:36.710716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.014 [2024-11-19 11:38:36.721937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.014 [2024-11-19 11:38:36.722338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.014 [2024-11-19 11:38:36.722354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.014 [2024-11-19 11:38:36.722364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.014 [2024-11-19 11:38:36.722529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.014 [2024-11-19 11:38:36.722693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.014 [2024-11-19 11:38:36.722702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.014 [2024-11-19 11:38:36.722709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.014 [2024-11-19 11:38:36.722716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.014 [2024-11-19 11:38:36.734787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.014 [2024-11-19 11:38:36.735201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.014 [2024-11-19 11:38:36.735219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.014 [2024-11-19 11:38:36.735227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.014 [2024-11-19 11:38:36.735400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.014 [2024-11-19 11:38:36.735573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.014 [2024-11-19 11:38:36.735584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.014 [2024-11-19 11:38:36.735591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.014 [2024-11-19 11:38:36.735597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.014 [2024-11-19 11:38:36.747932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.014 [2024-11-19 11:38:36.748343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.014 [2024-11-19 11:38:36.748360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.014 [2024-11-19 11:38:36.748369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.014 [2024-11-19 11:38:36.748547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.014 [2024-11-19 11:38:36.748726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.014 [2024-11-19 11:38:36.748736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.014 [2024-11-19 11:38:36.748743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.014 [2024-11-19 11:38:36.748749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.014 [2024-11-19 11:38:36.760791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.014 [2024-11-19 11:38:36.761156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.014 [2024-11-19 11:38:36.761174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.014 [2024-11-19 11:38:36.761181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.014 [2024-11-19 11:38:36.761344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.014 [2024-11-19 11:38:36.761511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.014 [2024-11-19 11:38:36.761520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.014 [2024-11-19 11:38:36.761527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.014 [2024-11-19 11:38:36.761533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.014 [2024-11-19 11:38:36.773962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.014 [2024-11-19 11:38:36.774389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.014 [2024-11-19 11:38:36.774407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.014 [2024-11-19 11:38:36.774415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.014 [2024-11-19 11:38:36.774594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.014 [2024-11-19 11:38:36.774773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.014 [2024-11-19 11:38:36.774783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.014 [2024-11-19 11:38:36.774790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.014 [2024-11-19 11:38:36.774797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.014 [2024-11-19 11:38:36.786996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.014 [2024-11-19 11:38:36.787429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.014 [2024-11-19 11:38:36.787446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.014 [2024-11-19 11:38:36.787454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.014 [2024-11-19 11:38:36.787626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.014 [2024-11-19 11:38:36.787801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.014 [2024-11-19 11:38:36.787811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.014 [2024-11-19 11:38:36.787818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.014 [2024-11-19 11:38:36.787824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.311 [2024-11-19 11:38:36.800186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.311 [2024-11-19 11:38:36.800609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.311 [2024-11-19 11:38:36.800626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.311 [2024-11-19 11:38:36.800634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.311 [2024-11-19 11:38:36.800806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.311 [2024-11-19 11:38:36.800985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.311 [2024-11-19 11:38:36.800996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.311 [2024-11-19 11:38:36.801007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.311 [2024-11-19 11:38:36.801014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.311 [2024-11-19 11:38:36.812994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.311 [2024-11-19 11:38:36.813387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.311 [2024-11-19 11:38:36.813403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.311 [2024-11-19 11:38:36.813411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.311 [2024-11-19 11:38:36.813574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.312 [2024-11-19 11:38:36.813737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.312 [2024-11-19 11:38:36.813746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.312 [2024-11-19 11:38:36.813752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.312 [2024-11-19 11:38:36.813759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.312 [2024-11-19 11:38:36.825778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.312 [2024-11-19 11:38:36.826177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.312 [2024-11-19 11:38:36.826194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.312 [2024-11-19 11:38:36.826202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.312 [2024-11-19 11:38:36.826364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.312 [2024-11-19 11:38:36.826527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.312 [2024-11-19 11:38:36.826537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.312 [2024-11-19 11:38:36.826543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.312 [2024-11-19 11:38:36.826550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.312 [2024-11-19 11:38:36.838620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.312 [2024-11-19 11:38:36.839035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.312 [2024-11-19 11:38:36.839052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.312 [2024-11-19 11:38:36.839060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.312 [2024-11-19 11:38:36.839223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.312 [2024-11-19 11:38:36.839387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.312 [2024-11-19 11:38:36.839396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.312 [2024-11-19 11:38:36.839403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.312 [2024-11-19 11:38:36.839410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.312 [2024-11-19 11:38:36.851547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.312 [2024-11-19 11:38:36.851994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.312 [2024-11-19 11:38:36.852039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.312 [2024-11-19 11:38:36.852062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.312 [2024-11-19 11:38:36.852629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.312 [2024-11-19 11:38:36.852794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.312 [2024-11-19 11:38:36.852803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.312 [2024-11-19 11:38:36.852809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.312 [2024-11-19 11:38:36.852816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.312 [2024-11-19 11:38:36.864437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.312 [2024-11-19 11:38:36.864850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.312 [2024-11-19 11:38:36.864894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.312 [2024-11-19 11:38:36.864917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.312 [2024-11-19 11:38:36.865513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.312 [2024-11-19 11:38:36.865883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.312 [2024-11-19 11:38:36.865893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.312 [2024-11-19 11:38:36.865899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.312 [2024-11-19 11:38:36.865906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.312 [2024-11-19 11:38:36.877301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.312 [2024-11-19 11:38:36.877724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.312 [2024-11-19 11:38:36.877768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.312 [2024-11-19 11:38:36.877792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.312 [2024-11-19 11:38:36.878384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.312 [2024-11-19 11:38:36.878931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.312 [2024-11-19 11:38:36.878940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.312 [2024-11-19 11:38:36.878952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.312 [2024-11-19 11:38:36.878959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.312 [2024-11-19 11:38:36.890256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.312 [2024-11-19 11:38:36.890669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.312 [2024-11-19 11:38:36.890685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.312 [2024-11-19 11:38:36.890696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.312 [2024-11-19 11:38:36.890860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.312 [2024-11-19 11:38:36.891048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.312 [2024-11-19 11:38:36.891059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.312 [2024-11-19 11:38:36.891066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.312 [2024-11-19 11:38:36.891073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.312 5495.00 IOPS, 21.46 MiB/s [2024-11-19T10:38:37.093Z] [2024-11-19 11:38:36.904364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.312 [2024-11-19 11:38:36.904700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.312 [2024-11-19 11:38:36.904717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.312 [2024-11-19 11:38:36.904725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.312 [2024-11-19 11:38:36.904889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.312 [2024-11-19 11:38:36.905060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.312 [2024-11-19 11:38:36.905071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.312 [2024-11-19 11:38:36.905077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.312 [2024-11-19 11:38:36.905084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.312 [2024-11-19 11:38:36.917152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.312 [2024-11-19 11:38:36.917561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.312 [2024-11-19 11:38:36.917579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.312 [2024-11-19 11:38:36.917588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.312 [2024-11-19 11:38:36.917750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.312 [2024-11-19 11:38:36.917914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.312 [2024-11-19 11:38:36.917925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.312 [2024-11-19 11:38:36.917931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.312 [2024-11-19 11:38:36.917938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.312 [2024-11-19 11:38:36.930013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.312 [2024-11-19 11:38:36.930345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.312 [2024-11-19 11:38:36.930361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.312 [2024-11-19 11:38:36.930370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.312 [2024-11-19 11:38:36.930532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.312 [2024-11-19 11:38:36.930698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.312 [2024-11-19 11:38:36.930708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.312 [2024-11-19 11:38:36.930714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.312 [2024-11-19 11:38:36.930721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.312 [2024-11-19 11:38:36.942885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.312 [2024-11-19 11:38:36.943301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.313 [2024-11-19 11:38:36.943339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.313 [2024-11-19 11:38:36.943366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.313 [2024-11-19 11:38:36.943908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.313 [2024-11-19 11:38:36.944078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.313 [2024-11-19 11:38:36.944088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.313 [2024-11-19 11:38:36.944094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.313 [2024-11-19 11:38:36.944101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.313 [2024-11-19 11:38:36.955748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.313 [2024-11-19 11:38:36.956109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.313 [2024-11-19 11:38:36.956154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.313 [2024-11-19 11:38:36.956178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.313 [2024-11-19 11:38:36.956760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.313 [2024-11-19 11:38:36.957242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.313 [2024-11-19 11:38:36.957251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.313 [2024-11-19 11:38:36.957259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.313 [2024-11-19 11:38:36.957266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.313 [2024-11-19 11:38:36.968701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.313 [2024-11-19 11:38:36.969118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.313 [2024-11-19 11:38:36.969134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.313 [2024-11-19 11:38:36.969165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.313 [2024-11-19 11:38:36.969746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.313 [2024-11-19 11:38:36.970081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.313 [2024-11-19 11:38:36.970091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.313 [2024-11-19 11:38:36.970102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.313 [2024-11-19 11:38:36.970109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.313 [2024-11-19 11:38:36.981564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.313 [2024-11-19 11:38:36.981891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.313 [2024-11-19 11:38:36.981908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.313 [2024-11-19 11:38:36.981915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.313 [2024-11-19 11:38:36.982086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.313 [2024-11-19 11:38:36.982250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.313 [2024-11-19 11:38:36.982259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.313 [2024-11-19 11:38:36.982265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.313 [2024-11-19 11:38:36.982271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.313 [2024-11-19 11:38:36.994362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.313 [2024-11-19 11:38:36.994790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.313 [2024-11-19 11:38:36.994807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.313 [2024-11-19 11:38:36.994815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.313 [2024-11-19 11:38:36.994993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.313 [2024-11-19 11:38:36.995167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.313 [2024-11-19 11:38:36.995177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.313 [2024-11-19 11:38:36.995183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.313 [2024-11-19 11:38:36.995190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.313 [2024-11-19 11:38:37.007528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.313 [2024-11-19 11:38:37.007938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.313 [2024-11-19 11:38:37.007961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.313 [2024-11-19 11:38:37.007971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.313 [2024-11-19 11:38:37.008149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.313 [2024-11-19 11:38:37.008334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.313 [2024-11-19 11:38:37.008344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.313 [2024-11-19 11:38:37.008350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.313 [2024-11-19 11:38:37.008357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.313 [2024-11-19 11:38:37.020530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.313 [2024-11-19 11:38:37.020935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.313 [2024-11-19 11:38:37.020958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.313 [2024-11-19 11:38:37.020966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.313 [2024-11-19 11:38:37.021139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.313 [2024-11-19 11:38:37.021313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.313 [2024-11-19 11:38:37.021323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.313 [2024-11-19 11:38:37.021330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.313 [2024-11-19 11:38:37.021337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.313 [2024-11-19 11:38:37.033434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.313 [2024-11-19 11:38:37.033851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.313 [2024-11-19 11:38:37.033891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.313 [2024-11-19 11:38:37.033917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.313 [2024-11-19 11:38:37.034465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.313 [2024-11-19 11:38:37.034630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.313 [2024-11-19 11:38:37.034640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.313 [2024-11-19 11:38:37.034646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.313 [2024-11-19 11:38:37.034653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.313 [2024-11-19 11:38:37.046447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.313 [2024-11-19 11:38:37.046815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.313 [2024-11-19 11:38:37.046861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.313 [2024-11-19 11:38:37.046884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.313 [2024-11-19 11:38:37.047354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.313 [2024-11-19 11:38:37.047519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.313 [2024-11-19 11:38:37.047528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.313 [2024-11-19 11:38:37.047534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.313 [2024-11-19 11:38:37.047541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.313 [2024-11-19 11:38:37.059496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.313 [2024-11-19 11:38:37.059915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.313 [2024-11-19 11:38:37.059969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.313 [2024-11-19 11:38:37.060001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.313 [2024-11-19 11:38:37.060406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.313 [2024-11-19 11:38:37.060571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.313 [2024-11-19 11:38:37.060581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.313 [2024-11-19 11:38:37.060587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.313 [2024-11-19 11:38:37.060593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.313 [2024-11-19 11:38:37.072540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.314 [2024-11-19 11:38:37.072923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.314 [2024-11-19 11:38:37.072940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.314 [2024-11-19 11:38:37.072952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.314 [2024-11-19 11:38:37.073125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.314 [2024-11-19 11:38:37.073297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.314 [2024-11-19 11:38:37.073307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.314 [2024-11-19 11:38:37.073314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.314 [2024-11-19 11:38:37.073321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.314 [2024-11-19 11:38:37.085507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.314 [2024-11-19 11:38:37.085842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.314 [2024-11-19 11:38:37.085859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.314 [2024-11-19 11:38:37.085867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.314 [2024-11-19 11:38:37.086046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.314 [2024-11-19 11:38:37.086219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.314 [2024-11-19 11:38:37.086229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.314 [2024-11-19 11:38:37.086235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.314 [2024-11-19 11:38:37.086242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.574 [2024-11-19 11:38:37.098456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.574 [2024-11-19 11:38:37.098859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.574 [2024-11-19 11:38:37.098876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.574 [2024-11-19 11:38:37.098885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.574 [2024-11-19 11:38:37.099053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.574 [2024-11-19 11:38:37.099222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.574 [2024-11-19 11:38:37.099231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.574 [2024-11-19 11:38:37.099238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.574 [2024-11-19 11:38:37.099244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.574 [2024-11-19 11:38:37.111276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.574 [2024-11-19 11:38:37.111626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.574 [2024-11-19 11:38:37.111669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420
00:27:23.574 [2024-11-19 11:38:37.111693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set
00:27:23.574 [2024-11-19 11:38:37.112219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor
00:27:23.574 [2024-11-19 11:38:37.112384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.575 [2024-11-19 11:38:37.112394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.575 [2024-11-19 11:38:37.112400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.575 [2024-11-19 11:38:37.112407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.575 [2024-11-19 11:38:37.124180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.575 [2024-11-19 11:38:37.124546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.575 [2024-11-19 11:38:37.124589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.575 [2024-11-19 11:38:37.124612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.575 [2024-11-19 11:38:37.125203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.575 [2024-11-19 11:38:37.125367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.575 [2024-11-19 11:38:37.125376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.575 [2024-11-19 11:38:37.125382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.575 [2024-11-19 11:38:37.125389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.575 [2024-11-19 11:38:37.137007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.575 [2024-11-19 11:38:37.137423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.575 [2024-11-19 11:38:37.137439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.575 [2024-11-19 11:38:37.137447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.575 [2024-11-19 11:38:37.137610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.575 [2024-11-19 11:38:37.137774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.575 [2024-11-19 11:38:37.137783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.575 [2024-11-19 11:38:37.137794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.575 [2024-11-19 11:38:37.137801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.575 [2024-11-19 11:38:37.149861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.575 [2024-11-19 11:38:37.150204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.575 [2024-11-19 11:38:37.150220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.575 [2024-11-19 11:38:37.150229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.575 [2024-11-19 11:38:37.150392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.575 [2024-11-19 11:38:37.150555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.575 [2024-11-19 11:38:37.150564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.575 [2024-11-19 11:38:37.150570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.575 [2024-11-19 11:38:37.150578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.575 [2024-11-19 11:38:37.162750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.575 [2024-11-19 11:38:37.163166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.575 [2024-11-19 11:38:37.163183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.575 [2024-11-19 11:38:37.163191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.575 [2024-11-19 11:38:37.163355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.575 [2024-11-19 11:38:37.163518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.575 [2024-11-19 11:38:37.163527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.575 [2024-11-19 11:38:37.163534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.575 [2024-11-19 11:38:37.163540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.575 [2024-11-19 11:38:37.175620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.575 [2024-11-19 11:38:37.176081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.575 [2024-11-19 11:38:37.176126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.575 [2024-11-19 11:38:37.176150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.575 [2024-11-19 11:38:37.176693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.575 [2024-11-19 11:38:37.176858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.575 [2024-11-19 11:38:37.176867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.575 [2024-11-19 11:38:37.176876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.575 [2024-11-19 11:38:37.176883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.575 [2024-11-19 11:38:37.188456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.575 [2024-11-19 11:38:37.188737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.575 [2024-11-19 11:38:37.188754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.575 [2024-11-19 11:38:37.188762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.575 [2024-11-19 11:38:37.188925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.575 [2024-11-19 11:38:37.189093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.575 [2024-11-19 11:38:37.189102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.575 [2024-11-19 11:38:37.189109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.575 [2024-11-19 11:38:37.189115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.575 [2024-11-19 11:38:37.201346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.575 [2024-11-19 11:38:37.201679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.575 [2024-11-19 11:38:37.201724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.575 [2024-11-19 11:38:37.201748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.575 [2024-11-19 11:38:37.202340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.575 [2024-11-19 11:38:37.202762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.575 [2024-11-19 11:38:37.202772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.575 [2024-11-19 11:38:37.202778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.575 [2024-11-19 11:38:37.202784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.575 [2024-11-19 11:38:37.214247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.575 [2024-11-19 11:38:37.214663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.575 [2024-11-19 11:38:37.214702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.575 [2024-11-19 11:38:37.214727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.575 [2024-11-19 11:38:37.215323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.575 [2024-11-19 11:38:37.215563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.575 [2024-11-19 11:38:37.215572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.575 [2024-11-19 11:38:37.215578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.575 [2024-11-19 11:38:37.215585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.575 [2024-11-19 11:38:37.227151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.575 [2024-11-19 11:38:37.227406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.575 [2024-11-19 11:38:37.227422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.575 [2024-11-19 11:38:37.227434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.575 [2024-11-19 11:38:37.227598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.575 [2024-11-19 11:38:37.227761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.575 [2024-11-19 11:38:37.227770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.575 [2024-11-19 11:38:37.227777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.575 [2024-11-19 11:38:37.227783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.575 [2024-11-19 11:38:37.240292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.575 [2024-11-19 11:38:37.240585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.575 [2024-11-19 11:38:37.240601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.575 [2024-11-19 11:38:37.240609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.575 [2024-11-19 11:38:37.240781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.575 [2024-11-19 11:38:37.240958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.576 [2024-11-19 11:38:37.240968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.576 [2024-11-19 11:38:37.240976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.576 [2024-11-19 11:38:37.240982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.576 [2024-11-19 11:38:37.253157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.576 [2024-11-19 11:38:37.253521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.576 [2024-11-19 11:38:37.253538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.576 [2024-11-19 11:38:37.253547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.576 [2024-11-19 11:38:37.253719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.576 [2024-11-19 11:38:37.253893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.576 [2024-11-19 11:38:37.253902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.576 [2024-11-19 11:38:37.253909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.576 [2024-11-19 11:38:37.253915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.576 [2024-11-19 11:38:37.266265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.576 [2024-11-19 11:38:37.266558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.576 [2024-11-19 11:38:37.266575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.576 [2024-11-19 11:38:37.266582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.576 [2024-11-19 11:38:37.266760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.576 [2024-11-19 11:38:37.266942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.576 [2024-11-19 11:38:37.266958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.576 [2024-11-19 11:38:37.266965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.576 [2024-11-19 11:38:37.266971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.576 [2024-11-19 11:38:37.279288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.576 [2024-11-19 11:38:37.279619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.576 [2024-11-19 11:38:37.279636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.576 [2024-11-19 11:38:37.279644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.576 [2024-11-19 11:38:37.279817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.576 [2024-11-19 11:38:37.279996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.576 [2024-11-19 11:38:37.280006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.576 [2024-11-19 11:38:37.280013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.576 [2024-11-19 11:38:37.280020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.576 [2024-11-19 11:38:37.292386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.576 [2024-11-19 11:38:37.292675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.576 [2024-11-19 11:38:37.292692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.576 [2024-11-19 11:38:37.292700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.576 [2024-11-19 11:38:37.292878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.576 [2024-11-19 11:38:37.293062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.576 [2024-11-19 11:38:37.293072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.576 [2024-11-19 11:38:37.293079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.576 [2024-11-19 11:38:37.293086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.576 [2024-11-19 11:38:37.305438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.576 [2024-11-19 11:38:37.305846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.576 [2024-11-19 11:38:37.305863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.576 [2024-11-19 11:38:37.305871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.576 [2024-11-19 11:38:37.306055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.576 [2024-11-19 11:38:37.306233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.576 [2024-11-19 11:38:37.306244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.576 [2024-11-19 11:38:37.306254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.576 [2024-11-19 11:38:37.306262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.576 [2024-11-19 11:38:37.318602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.576 [2024-11-19 11:38:37.318891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.576 [2024-11-19 11:38:37.318908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.576 [2024-11-19 11:38:37.318917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.576 [2024-11-19 11:38:37.319100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.576 [2024-11-19 11:38:37.319279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.576 [2024-11-19 11:38:37.319289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.576 [2024-11-19 11:38:37.319297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.576 [2024-11-19 11:38:37.319305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.576 [2024-11-19 11:38:37.331654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.576 [2024-11-19 11:38:37.331993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.576 [2024-11-19 11:38:37.332010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.576 [2024-11-19 11:38:37.332018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.576 [2024-11-19 11:38:37.332197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.576 [2024-11-19 11:38:37.332377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.576 [2024-11-19 11:38:37.332387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.576 [2024-11-19 11:38:37.332394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.576 [2024-11-19 11:38:37.332402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.576 [2024-11-19 11:38:37.344750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.576 [2024-11-19 11:38:37.345162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.576 [2024-11-19 11:38:37.345180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.576 [2024-11-19 11:38:37.345189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.576 [2024-11-19 11:38:37.345368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.576 [2024-11-19 11:38:37.345548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.576 [2024-11-19 11:38:37.345557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.576 [2024-11-19 11:38:37.345564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.576 [2024-11-19 11:38:37.345571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.837 [2024-11-19 11:38:37.357919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.837 [2024-11-19 11:38:37.358356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.837 [2024-11-19 11:38:37.358374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.837 [2024-11-19 11:38:37.358382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.837 [2024-11-19 11:38:37.358560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.837 [2024-11-19 11:38:37.358739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.837 [2024-11-19 11:38:37.358749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.837 [2024-11-19 11:38:37.358756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.837 [2024-11-19 11:38:37.358762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.837 [2024-11-19 11:38:37.371081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.837 [2024-11-19 11:38:37.371491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.837 [2024-11-19 11:38:37.371508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.837 [2024-11-19 11:38:37.371516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.837 [2024-11-19 11:38:37.371694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.837 [2024-11-19 11:38:37.371874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.837 [2024-11-19 11:38:37.371884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.837 [2024-11-19 11:38:37.371891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.837 [2024-11-19 11:38:37.371898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.837 [2024-11-19 11:38:37.384251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.837 [2024-11-19 11:38:37.384681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.837 [2024-11-19 11:38:37.384697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.837 [2024-11-19 11:38:37.384705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.837 [2024-11-19 11:38:37.384882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.837 [2024-11-19 11:38:37.385067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.837 [2024-11-19 11:38:37.385077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.837 [2024-11-19 11:38:37.385084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.837 [2024-11-19 11:38:37.385090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.837 [2024-11-19 11:38:37.397445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.837 [2024-11-19 11:38:37.397879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.837 [2024-11-19 11:38:37.397896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.837 [2024-11-19 11:38:37.397907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.837 [2024-11-19 11:38:37.398090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.837 [2024-11-19 11:38:37.398270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.837 [2024-11-19 11:38:37.398280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.837 [2024-11-19 11:38:37.398287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.837 [2024-11-19 11:38:37.398294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.837 [2024-11-19 11:38:37.410481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.837 [2024-11-19 11:38:37.410912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.837 [2024-11-19 11:38:37.410930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.837 [2024-11-19 11:38:37.410938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.837 [2024-11-19 11:38:37.411120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.837 [2024-11-19 11:38:37.411299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.837 [2024-11-19 11:38:37.411309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.837 [2024-11-19 11:38:37.411316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.837 [2024-11-19 11:38:37.411322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2415069 Killed "${NVMF_APP[@]}" "$@" 00:27:23.837 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:23.837 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:23.837 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:23.837 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:23.837 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.837 [2024-11-19 11:38:37.423667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.837 [2024-11-19 11:38:37.423943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.837 [2024-11-19 11:38:37.423967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.837 [2024-11-19 11:38:37.423975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.837 [2024-11-19 11:38:37.424154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.837 [2024-11-19 11:38:37.424333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.837 [2024-11-19 11:38:37.424343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.837 [2024-11-19 11:38:37.424350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:27:23.838 [2024-11-19 11:38:37.424356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.838 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2416376 00:27:23.838 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2416376 00:27:23.838 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:23.838 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2416376 ']' 00:27:23.838 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.838 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.838 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:23.838 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.838 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.838 [2024-11-19 11:38:37.436716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.838 [2024-11-19 11:38:37.437041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.838 [2024-11-19 11:38:37.437059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.838 [2024-11-19 11:38:37.437066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.838 [2024-11-19 11:38:37.437244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.838 [2024-11-19 11:38:37.437421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.838 [2024-11-19 11:38:37.437431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.838 [2024-11-19 11:38:37.437438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.838 [2024-11-19 11:38:37.437445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.838 [2024-11-19 11:38:37.449806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.838 [2024-11-19 11:38:37.450104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.838 [2024-11-19 11:38:37.450121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.838 [2024-11-19 11:38:37.450130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.838 [2024-11-19 11:38:37.450308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.838 [2024-11-19 11:38:37.450488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.838 [2024-11-19 11:38:37.450497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.838 [2024-11-19 11:38:37.450504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.838 [2024-11-19 11:38:37.450512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.838 [2024-11-19 11:38:37.462892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.838 [2024-11-19 11:38:37.463286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.838 [2024-11-19 11:38:37.463304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.838 [2024-11-19 11:38:37.463312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.838 [2024-11-19 11:38:37.463496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.838 [2024-11-19 11:38:37.463675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.838 [2024-11-19 11:38:37.463685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.838 [2024-11-19 11:38:37.463692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.838 [2024-11-19 11:38:37.463698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.838 [2024-11-19 11:38:37.470832] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:27:23.838 [2024-11-19 11:38:37.470871] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.838 [2024-11-19 11:38:37.475894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.838 [2024-11-19 11:38:37.476232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.838 [2024-11-19 11:38:37.476250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.838 [2024-11-19 11:38:37.476258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.838 [2024-11-19 11:38:37.476451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.838 [2024-11-19 11:38:37.476631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.838 [2024-11-19 11:38:37.476641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.838 [2024-11-19 11:38:37.476648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.838 [2024-11-19 11:38:37.476655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.838 [2024-11-19 11:38:37.488970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.838 [2024-11-19 11:38:37.489260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.838 [2024-11-19 11:38:37.489277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.838 [2024-11-19 11:38:37.489286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.838 [2024-11-19 11:38:37.489460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.838 [2024-11-19 11:38:37.489634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.838 [2024-11-19 11:38:37.489643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.838 [2024-11-19 11:38:37.489650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.838 [2024-11-19 11:38:37.489657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.838 [2024-11-19 11:38:37.502025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.838 [2024-11-19 11:38:37.502363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.838 [2024-11-19 11:38:37.502381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.838 [2024-11-19 11:38:37.502390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.838 [2024-11-19 11:38:37.502576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.838 [2024-11-19 11:38:37.502756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.838 [2024-11-19 11:38:37.502766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.838 [2024-11-19 11:38:37.502773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.838 [2024-11-19 11:38:37.502781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.838 [2024-11-19 11:38:37.515138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.838 [2024-11-19 11:38:37.515479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.838 [2024-11-19 11:38:37.515499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.838 [2024-11-19 11:38:37.515507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.838 [2024-11-19 11:38:37.515685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.838 [2024-11-19 11:38:37.515863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.838 [2024-11-19 11:38:37.515873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.838 [2024-11-19 11:38:37.515880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.838 [2024-11-19 11:38:37.515888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.838 [2024-11-19 11:38:37.528241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.838 [2024-11-19 11:38:37.528674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.838 [2024-11-19 11:38:37.528691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.838 [2024-11-19 11:38:37.528700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.838 [2024-11-19 11:38:37.528878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.838 [2024-11-19 11:38:37.529062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.838 [2024-11-19 11:38:37.529072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.838 [2024-11-19 11:38:37.529079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.838 [2024-11-19 11:38:37.529086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.838 [2024-11-19 11:38:37.541437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.838 [2024-11-19 11:38:37.541723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.838 [2024-11-19 11:38:37.541740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.838 [2024-11-19 11:38:37.541749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.838 [2024-11-19 11:38:37.541926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.839 [2024-11-19 11:38:37.542111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.839 [2024-11-19 11:38:37.542126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.839 [2024-11-19 11:38:37.542133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.839 [2024-11-19 11:38:37.542140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.839 [2024-11-19 11:38:37.550720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:23.839 [2024-11-19 11:38:37.554401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.839 [2024-11-19 11:38:37.554718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.839 [2024-11-19 11:38:37.554736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.839 [2024-11-19 11:38:37.554744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.839 [2024-11-19 11:38:37.554917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.839 [2024-11-19 11:38:37.555095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.839 [2024-11-19 11:38:37.555105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.839 [2024-11-19 11:38:37.555112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.839 [2024-11-19 11:38:37.555119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.839 [2024-11-19 11:38:37.567566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.839 [2024-11-19 11:38:37.567908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.839 [2024-11-19 11:38:37.567925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.839 [2024-11-19 11:38:37.567933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.839 [2024-11-19 11:38:37.568111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.839 [2024-11-19 11:38:37.568285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.839 [2024-11-19 11:38:37.568295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.839 [2024-11-19 11:38:37.568303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.839 [2024-11-19 11:38:37.568310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.839 [2024-11-19 11:38:37.580610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.839 [2024-11-19 11:38:37.580971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.839 [2024-11-19 11:38:37.580989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.839 [2024-11-19 11:38:37.580997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.839 [2024-11-19 11:38:37.581170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.839 [2024-11-19 11:38:37.581343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.839 [2024-11-19 11:38:37.581354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.839 [2024-11-19 11:38:37.581366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.839 [2024-11-19 11:38:37.581373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.839 [2024-11-19 11:38:37.593666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.839 [2024-11-19 11:38:37.593689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.839 [2024-11-19 11:38:37.593694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:23.839 [2024-11-19 11:38:37.593706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.839 [2024-11-19 11:38:37.593712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.839 [2024-11-19 11:38:37.593717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:23.839 [2024-11-19 11:38:37.593980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.839 [2024-11-19 11:38:37.593997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.839 [2024-11-19 11:38:37.594006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.839 [2024-11-19 11:38:37.594180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.839 [2024-11-19 11:38:37.594351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.839 [2024-11-19 11:38:37.594360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.839 [2024-11-19 11:38:37.594367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.839 [2024-11-19 11:38:37.594374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.839 [2024-11-19 11:38:37.595005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:23.839 [2024-11-19 11:38:37.595118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.839 [2024-11-19 11:38:37.595120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:23.839 [2024-11-19 11:38:37.606745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.839 [2024-11-19 11:38:37.607112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.839 [2024-11-19 11:38:37.607134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:23.839 [2024-11-19 11:38:37.607143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:23.839 [2024-11-19 11:38:37.607322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:23.839 [2024-11-19 11:38:37.607504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.839 [2024-11-19 11:38:37.607514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.839 [2024-11-19 11:38:37.607522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.839 [2024-11-19 11:38:37.607530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.100 [2024-11-19 11:38:37.619930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.100 [2024-11-19 11:38:37.620300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.100 [2024-11-19 11:38:37.620320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.100 [2024-11-19 11:38:37.620330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.100 [2024-11-19 11:38:37.620515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.100 [2024-11-19 11:38:37.620696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.100 [2024-11-19 11:38:37.620705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.100 [2024-11-19 11:38:37.620713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.100 [2024-11-19 11:38:37.620721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.100 [2024-11-19 11:38:37.633110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.100 [2024-11-19 11:38:37.633546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.100 [2024-11-19 11:38:37.633567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.100 [2024-11-19 11:38:37.633576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.100 [2024-11-19 11:38:37.633757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.100 [2024-11-19 11:38:37.633939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.100 [2024-11-19 11:38:37.633956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.100 [2024-11-19 11:38:37.633964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.100 [2024-11-19 11:38:37.633972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.100 [2024-11-19 11:38:37.646319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.100 [2024-11-19 11:38:37.646774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.100 [2024-11-19 11:38:37.646795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.100 [2024-11-19 11:38:37.646804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.100 [2024-11-19 11:38:37.647003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.100 [2024-11-19 11:38:37.647187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.100 [2024-11-19 11:38:37.647196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.100 [2024-11-19 11:38:37.647204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.100 [2024-11-19 11:38:37.647212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.100 [2024-11-19 11:38:37.659390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.100 [2024-11-19 11:38:37.659759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.100 [2024-11-19 11:38:37.659778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.100 [2024-11-19 11:38:37.659787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.100 [2024-11-19 11:38:37.659970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.100 [2024-11-19 11:38:37.660148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.100 [2024-11-19 11:38:37.660163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.100 [2024-11-19 11:38:37.660170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.100 [2024-11-19 11:38:37.660177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.100 [2024-11-19 11:38:37.672536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.100 [2024-11-19 11:38:37.672899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.100 [2024-11-19 11:38:37.672916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.100 [2024-11-19 11:38:37.672925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.100 [2024-11-19 11:38:37.673110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.100 [2024-11-19 11:38:37.673289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.100 [2024-11-19 11:38:37.673300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.100 [2024-11-19 11:38:37.673307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.100 [2024-11-19 11:38:37.673315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.100 [2024-11-19 11:38:37.685660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.100 [2024-11-19 11:38:37.686093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.100 [2024-11-19 11:38:37.686111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.100 [2024-11-19 11:38:37.686120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.100 [2024-11-19 11:38:37.686299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.100 [2024-11-19 11:38:37.686479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.100 [2024-11-19 11:38:37.686489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.100 [2024-11-19 11:38:37.686497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.100 [2024-11-19 11:38:37.686504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.100 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.100 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:24.100 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:24.100 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:24.100 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.100 [2024-11-19 11:38:37.698853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.100 [2024-11-19 11:38:37.699250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.100 [2024-11-19 11:38:37.699268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.100 [2024-11-19 11:38:37.699278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.100 [2024-11-19 11:38:37.699458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.100 [2024-11-19 11:38:37.699643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.100 [2024-11-19 11:38:37.699653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.100 [2024-11-19 11:38:37.699662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.100 [2024-11-19 11:38:37.699670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.100 [2024-11-19 11:38:37.712034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.100 [2024-11-19 11:38:37.712445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.100 [2024-11-19 11:38:37.712462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.100 [2024-11-19 11:38:37.712470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.100 [2024-11-19 11:38:37.712649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.100 [2024-11-19 11:38:37.712827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.100 [2024-11-19 11:38:37.712837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.100 [2024-11-19 11:38:37.712844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.100 [2024-11-19 11:38:37.712850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.100 [2024-11-19 11:38:37.725216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.100 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.100 [2024-11-19 11:38:37.725629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.100 [2024-11-19 11:38:37.725650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.100 [2024-11-19 11:38:37.725660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:24.101 [2024-11-19 11:38:37.725838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.101 [2024-11-19 11:38:37.726024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.101 [2024-11-19 11:38:37.726035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.101 [2024-11-19 11:38:37.726042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.101 [2024-11-19 11:38:37.726048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.101 [2024-11-19 11:38:37.730479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.101 [2024-11-19 11:38:37.738396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.101 [2024-11-19 11:38:37.738743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.101 [2024-11-19 11:38:37.738761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.101 [2024-11-19 11:38:37.738770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.101 [2024-11-19 11:38:37.738954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.101 [2024-11-19 11:38:37.739134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.101 [2024-11-19 11:38:37.739144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.101 [2024-11-19 11:38:37.739151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:27:24.101 [2024-11-19 11:38:37.739157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:24.101 [2024-11-19 11:38:37.751516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.101 [2024-11-19 11:38:37.751883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.101 [2024-11-19 11:38:37.751901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.101 [2024-11-19 11:38:37.751909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.101 [2024-11-19 11:38:37.752094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.101 [2024-11-19 11:38:37.752273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.101 [2024-11-19 11:38:37.752283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.101 [2024-11-19 11:38:37.752289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.101 [2024-11-19 11:38:37.752297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.101 [2024-11-19 11:38:37.764652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.101 [2024-11-19 11:38:37.765020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.101 [2024-11-19 11:38:37.765038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.101 [2024-11-19 11:38:37.765046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.101 [2024-11-19 11:38:37.765224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.101 [2024-11-19 11:38:37.765403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.101 [2024-11-19 11:38:37.765413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.101 [2024-11-19 11:38:37.765419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.101 [2024-11-19 11:38:37.765426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.101 Malloc0 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.101 [2024-11-19 11:38:37.777786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.101 [2024-11-19 11:38:37.778160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.101 [2024-11-19 11:38:37.778178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.101 [2024-11-19 11:38:37.778187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.101 [2024-11-19 11:38:37.778365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.101 [2024-11-19 11:38:37.778543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.101 [2024-11-19 11:38:37.778552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.101 [2024-11-19 11:38:37.778559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.101 [2024-11-19 11:38:37.778566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.101 [2024-11-19 11:38:37.790909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.101 [2024-11-19 11:38:37.791350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.101 [2024-11-19 11:38:37.791367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee500 with addr=10.0.0.2, port=4420 00:27:24.101 [2024-11-19 11:38:37.791376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee500 is same with the state(6) to be set 00:27:24.101 [2024-11-19 11:38:37.791555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee500 (9): Bad file descriptor 00:27:24.101 [2024-11-19 11:38:37.791733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.101 [2024-11-19 11:38:37.791742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.101 [2024-11-19 11:38:37.791749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.101 [2024-11-19 11:38:37.791755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.101 [2024-11-19 11:38:37.797810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.101 11:38:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2415331 00:27:24.101 [2024-11-19 11:38:37.803944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.101 [2024-11-19 11:38:37.874832] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:27:25.298 4622.83 IOPS, 18.06 MiB/s [2024-11-19T10:38:40.017Z] 5522.86 IOPS, 21.57 MiB/s [2024-11-19T10:38:40.952Z] 6217.25 IOPS, 24.29 MiB/s [2024-11-19T10:38:42.332Z] 6741.44 IOPS, 26.33 MiB/s [2024-11-19T10:38:43.270Z] 7175.60 IOPS, 28.03 MiB/s [2024-11-19T10:38:44.208Z] 7532.73 IOPS, 29.42 MiB/s [2024-11-19T10:38:45.146Z] 7830.42 IOPS, 30.59 MiB/s [2024-11-19T10:38:46.083Z] 8084.54 IOPS, 31.58 MiB/s [2024-11-19T10:38:47.022Z] 8308.93 IOPS, 32.46 MiB/s [2024-11-19T10:38:47.022Z] 8493.53 IOPS, 33.18 MiB/s 00:27:33.241 Latency(us) 00:27:33.241 [2024-11-19T10:38:47.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.241 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:33.241 Verification LBA range: start 0x0 length 0x4000 00:27:33.241 Nvme1n1 : 15.01 8495.76 33.19 11090.43 0.00 6515.54 445.22 15272.74 00:27:33.241 [2024-11-19T10:38:47.022Z] =================================================================================================================== 00:27:33.241 [2024-11-19T10:38:47.022Z] Total : 8495.76 33.19 11090.43 0.00 6515.54 445.22 15272.74 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:33.500 11:38:47 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:33.500 rmmod nvme_tcp 00:27:33.500 rmmod nvme_fabrics 00:27:33.500 rmmod nvme_keyring 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2416376 ']' 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2416376 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2416376 ']' 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2416376 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2416376 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2416376' 00:27:33.500 killing 
process with pid 2416376 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2416376 00:27:33.500 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2416376 00:27:33.760 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:33.760 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:33.760 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:33.760 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:33.760 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:33.760 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:33.760 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:33.760 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:33.760 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:33.760 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.760 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.760 11:38:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:36.297 00:27:36.297 real 0m26.157s 00:27:36.297 user 1m0.852s 00:27:36.297 sys 0m6.942s 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.297 ************************************ 00:27:36.297 END TEST 
nvmf_bdevperf 00:27:36.297 ************************************ 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.297 ************************************ 00:27:36.297 START TEST nvmf_target_disconnect 00:27:36.297 ************************************ 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:36.297 * Looking for test storage... 00:27:36.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:36.297 11:38:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:36.297 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:36.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.298 --rc genhtml_branch_coverage=1 00:27:36.298 --rc genhtml_function_coverage=1 00:27:36.298 --rc genhtml_legend=1 00:27:36.298 --rc geninfo_all_blocks=1 00:27:36.298 --rc geninfo_unexecuted_blocks=1 
00:27:36.298 00:27:36.298 ' 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:36.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.298 --rc genhtml_branch_coverage=1 00:27:36.298 --rc genhtml_function_coverage=1 00:27:36.298 --rc genhtml_legend=1 00:27:36.298 --rc geninfo_all_blocks=1 00:27:36.298 --rc geninfo_unexecuted_blocks=1 00:27:36.298 00:27:36.298 ' 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:36.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.298 --rc genhtml_branch_coverage=1 00:27:36.298 --rc genhtml_function_coverage=1 00:27:36.298 --rc genhtml_legend=1 00:27:36.298 --rc geninfo_all_blocks=1 00:27:36.298 --rc geninfo_unexecuted_blocks=1 00:27:36.298 00:27:36.298 ' 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:36.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.298 --rc genhtml_branch_coverage=1 00:27:36.298 --rc genhtml_function_coverage=1 00:27:36.298 --rc genhtml_legend=1 00:27:36.298 --rc geninfo_all_blocks=1 00:27:36.298 --rc geninfo_unexecuted_blocks=1 00:27:36.298 00:27:36.298 ' 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.298 11:38:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:36.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:36.298 11:38:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:41.576 
11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:41.576 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:41.576 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:41.576 Found net devices under 0000:86:00.0: cvl_0_0 00:27:41.576 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.837 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.837 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:41.837 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.837 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.837 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.837 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:41.838 Found net devices under 0000:86:00.1: cvl_0_1 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.838 11:38:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:41.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:27:41.838 00:27:41.838 --- 10.0.0.2 ping statistics --- 00:27:41.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.838 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:41.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:27:41.838 00:27:41.838 --- 10.0.0.1 ping statistics --- 00:27:41.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.838 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:41.838 11:38:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:41.838 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:42.098 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:42.099 ************************************ 00:27:42.099 START TEST nvmf_target_disconnect_tc1 00:27:42.099 ************************************ 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:42.099 [2024-11-19 11:38:55.779007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.099 [2024-11-19 11:38:55.779117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13ab0 with 
addr=10.0.0.2, port=4420 00:27:42.099 [2024-11-19 11:38:55.779170] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:42.099 [2024-11-19 11:38:55.779196] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:42.099 [2024-11-19 11:38:55.779217] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:42.099 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:42.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:42.099 Initializing NVMe Controllers 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:42.099 00:27:42.099 real 0m0.123s 00:27:42.099 user 0m0.049s 00:27:42.099 sys 0m0.073s 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:42.099 ************************************ 00:27:42.099 END TEST nvmf_target_disconnect_tc1 00:27:42.099 ************************************ 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:42.099 11:38:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:42.099 ************************************ 00:27:42.099 START TEST nvmf_target_disconnect_tc2 00:27:42.099 ************************************ 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2421424 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2421424 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2421424 ']' 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.099 11:38:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.359 [2024-11-19 11:38:55.925222] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:27:42.359 [2024-11-19 11:38:55.925275] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.360 [2024-11-19 11:38:56.004205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:42.360 [2024-11-19 11:38:56.047185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.360 [2024-11-19 11:38:56.047222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.360 [2024-11-19 11:38:56.047230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.360 [2024-11-19 11:38:56.047236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.360 [2024-11-19 11:38:56.047241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:42.360 [2024-11-19 11:38:56.048801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:42.360 [2024-11-19 11:38:56.048906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:42.360 [2024-11-19 11:38:56.049015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:42.360 [2024-11-19 11:38:56.049015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.620 Malloc0 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.620 11:38:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.620 [2024-11-19 11:38:56.217311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.620 11:38:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.620 [2024-11-19 11:38:56.249578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2421591 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:42.620 11:38:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:44.523 11:38:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2421424 00:27:44.523 11:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 
Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 [2024-11-19 11:38:58.277616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 
00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Read completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.523 starting I/O failed 00:27:44.523 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 
00:27:44.524 [2024-11-19 11:38:58.277826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 
starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 [2024-11-19 11:38:58.278025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, 
sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Read completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 Write completed with error (sct=0, sc=8) 00:27:44.524 starting I/O failed 00:27:44.524 [2024-11-19 11:38:58.278225] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.524 [2024-11-19 11:38:58.278495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.524 [2024-11-19 11:38:58.278518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.524 qpair failed and we were unable to recover it. 00:27:44.524 [2024-11-19 11:38:58.278679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.524 [2024-11-19 11:38:58.278692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.524 qpair failed and we were unable to recover it. 00:27:44.524 [2024-11-19 11:38:58.278970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.524 [2024-11-19 11:38:58.279005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.524 qpair failed and we were unable to recover it. 00:27:44.524 [2024-11-19 11:38:58.279154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.524 [2024-11-19 11:38:58.279188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.524 qpair failed and we were unable to recover it. 00:27:44.524 [2024-11-19 11:38:58.279388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.524 [2024-11-19 11:38:58.279422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.524 qpair failed and we were unable to recover it. 
00:27:44.524 [2024-11-19 11:38:58.279638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.524 [2024-11-19 11:38:58.279670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.279931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.279975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.280127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.280139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.280219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.280248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.280382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.280415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 
00:27:44.525 [2024-11-19 11:38:58.280542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.280574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.280842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.280875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.280986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.280997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.281144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.281185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.281445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.281478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 
00:27:44.525 [2024-11-19 11:38:58.281722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.281756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.281973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.282008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.282266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.282299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.282483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.282515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.282710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.282743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 
00:27:44.525 [2024-11-19 11:38:58.282936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.282977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.283242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.283254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.283403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.283415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.283584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.283616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.283763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.283795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 
00:27:44.525 [2024-11-19 11:38:58.284104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.284138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.284349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.284361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.284526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.284559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.284778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.284810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.285032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.285062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 
00:27:44.525 [2024-11-19 11:38:58.285354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.285387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.285654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.285686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.285974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.286009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.286197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.286233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.286410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.286442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 
00:27:44.525 [2024-11-19 11:38:58.286580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.525 [2024-11-19 11:38:58.286613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.525 qpair failed and we were unable to recover it. 00:27:44.525 [2024-11-19 11:38:58.286787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.286818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.287089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.287129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.287306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.287338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.287521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.287553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 
00:27:44.526 [2024-11-19 11:38:58.287762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.287794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.288041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.288075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.288314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.288346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.288695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.288769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.289036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.289075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 
00:27:44.526 [2024-11-19 11:38:58.289343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.289377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.289618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.289651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.289922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.289973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.290108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.290142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.290273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.290306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 
00:27:44.526 [2024-11-19 11:38:58.290544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.290577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.290781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.290814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.290959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.290994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.291192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.291225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 00:27:44.526 [2024-11-19 11:38:58.291493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.526 [2024-11-19 11:38:58.291526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.526 qpair failed and we were unable to recover it. 
00:27:44.807 [2024-11-19 11:38:58.320009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.320044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.320164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.320198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.320379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.320413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.320602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.320635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.320839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.320872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 
00:27:44.807 [2024-11-19 11:38:58.321075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.321109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.321354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.321387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.321649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.321682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.321919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.321967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.322142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.322174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 
00:27:44.807 [2024-11-19 11:38:58.322465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.322497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.322682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.322713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.322816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.322849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.323161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.323195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.323468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.323501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 
00:27:44.807 [2024-11-19 11:38:58.323677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.323710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.807 [2024-11-19 11:38:58.323914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.807 [2024-11-19 11:38:58.323974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.807 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.324240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.324272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.324517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.324550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.324812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.324845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 
00:27:44.808 [2024-11-19 11:38:58.325103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.325137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.325426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.325457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.325729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.325762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.325966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.326002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.326192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.326225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 
00:27:44.808 [2024-11-19 11:38:58.326488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.326521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.326738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.326772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.326984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.327017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.327308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.327359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.327633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.327667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 
00:27:44.808 [2024-11-19 11:38:58.327930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.327989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.328260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.328293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.328427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.328460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.328722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.328754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.329040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.329074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 
00:27:44.808 [2024-11-19 11:38:58.329275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.329314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.329519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.329552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.329665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.329698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.329959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.329994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.330170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.330204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 
00:27:44.808 [2024-11-19 11:38:58.330493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.330525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.330740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.330773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.330979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.331013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.331253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.331286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.808 [2024-11-19 11:38:58.331463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.331497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 
00:27:44.808 [2024-11-19 11:38:58.331705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.808 [2024-11-19 11:38:58.331739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.808 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.332002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.332036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.332289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.332322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.332513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.332546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.332732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.332766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 
00:27:44.809 [2024-11-19 11:38:58.332970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.333007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.333204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.333237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.333359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.333391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.333592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.333626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.333866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.333899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 
00:27:44.809 [2024-11-19 11:38:58.334096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.334132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.334406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.334438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.334699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.334733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.334932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.334976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.335241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.335274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 
00:27:44.809 [2024-11-19 11:38:58.335487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.335519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.335749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.335781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.335985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.336019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.336157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.336190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.336444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.336477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 
00:27:44.809 [2024-11-19 11:38:58.336765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.336798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.337070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.337103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.337290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.337325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.337500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.337533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.337741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.337774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 
00:27:44.809 [2024-11-19 11:38:58.338038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.338073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.338333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.338367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.338615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.338648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.338938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.809 [2024-11-19 11:38:58.338981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.809 qpair failed and we were unable to recover it. 00:27:44.809 [2024-11-19 11:38:58.339129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.810 [2024-11-19 11:38:58.339162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.810 qpair failed and we were unable to recover it. 
00:27:44.810 [2024-11-19 11:38:58.339380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.810 [2024-11-19 11:38:58.339412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.810 qpair failed and we were unable to recover it. 
00:27:44.813 [2024-11-19 11:38:58.370005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.370040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.370216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.370250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.370520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.370553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.370843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.370876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.371151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.371186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 
00:27:44.814 [2024-11-19 11:38:58.371453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.371487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.371774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.371808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.372045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.372079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.372351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.372384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.372690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.372723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 
00:27:44.814 [2024-11-19 11:38:58.372898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.372932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.373133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.373167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.373437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.373472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.373679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.373712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.373965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.374001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 
00:27:44.814 [2024-11-19 11:38:58.374179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.374213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.374407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.374441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.374709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.374742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.374938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.374983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.375286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.375319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 
00:27:44.814 [2024-11-19 11:38:58.375533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.375567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.375831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.375865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.376157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.376198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.376399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.376434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.376645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.376678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 
00:27:44.814 [2024-11-19 11:38:58.376857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.376890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.377158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.377192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.377372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.377415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.377694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.377728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.377994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.378028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 
00:27:44.814 [2024-11-19 11:38:58.378317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.814 [2024-11-19 11:38:58.378350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.814 qpair failed and we were unable to recover it. 00:27:44.814 [2024-11-19 11:38:58.378551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.378584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.378760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.378793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.379064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.379098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.379371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.379404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 
00:27:44.815 [2024-11-19 11:38:58.379714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.379747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.380025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.380060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.380314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.380347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.380605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.380639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.380765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.380798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 
00:27:44.815 [2024-11-19 11:38:58.381072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.381107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.381285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.381319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.381588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.381622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.381897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.381930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.382084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.382119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 
00:27:44.815 [2024-11-19 11:38:58.382392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.382425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.382704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.382737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.382995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.383029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.383323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.383356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.383569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.383603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 
00:27:44.815 [2024-11-19 11:38:58.383881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.383916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.384062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.384098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.384388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.384422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.384564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.384598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.384790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.384824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 
00:27:44.815 [2024-11-19 11:38:58.385011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.385047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.385236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.385270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.385462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.385496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.385674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.815 [2024-11-19 11:38:58.385707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.815 qpair failed and we were unable to recover it. 00:27:44.815 [2024-11-19 11:38:58.385973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.386009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 
00:27:44.816 [2024-11-19 11:38:58.386282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.386316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 00:27:44.816 [2024-11-19 11:38:58.386601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.386634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 00:27:44.816 [2024-11-19 11:38:58.386913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.386957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 00:27:44.816 [2024-11-19 11:38:58.387229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.387269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 00:27:44.816 [2024-11-19 11:38:58.387474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.387508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 
00:27:44.816 [2024-11-19 11:38:58.387816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.387849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 00:27:44.816 [2024-11-19 11:38:58.388046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.388081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 00:27:44.816 [2024-11-19 11:38:58.388268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.388301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 00:27:44.816 [2024-11-19 11:38:58.388521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.388553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 00:27:44.816 [2024-11-19 11:38:58.388813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.388849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 
00:27:44.816 [2024-11-19 11:38:58.389101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.389136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 00:27:44.816 [2024-11-19 11:38:58.389430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.389464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 00:27:44.816 [2024-11-19 11:38:58.389726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.389760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 00:27:44.816 [2024-11-19 11:38:58.390115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.390150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 00:27:44.816 [2024-11-19 11:38:58.390427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.816 [2024-11-19 11:38:58.390461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:44.816 qpair failed and we were unable to recover it. 
00:27:44.816 [2024-11-19 11:38:58.390653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.816 [2024-11-19 11:38:58.390686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.816 qpair failed and we were unable to recover it.
00:27:44.816 [2024-11-19 11:38:58.390872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.816 [2024-11-19 11:38:58.390905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.816 qpair failed and we were unable to recover it.
00:27:44.816 [2024-11-19 11:38:58.391120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.816 [2024-11-19 11:38:58.391156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.816 qpair failed and we were unable to recover it.
00:27:44.816 [2024-11-19 11:38:58.391407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.816 [2024-11-19 11:38:58.391443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.816 qpair failed and we were unable to recover it.
00:27:44.816 [2024-11-19 11:38:58.391578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.816 [2024-11-19 11:38:58.391611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.816 qpair failed and we were unable to recover it.
00:27:44.816 [2024-11-19 11:38:58.391807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.816 [2024-11-19 11:38:58.391841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.816 qpair failed and we were unable to recover it.
00:27:44.816 [2024-11-19 11:38:58.392109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.816 [2024-11-19 11:38:58.392145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.816 qpair failed and we were unable to recover it.
00:27:44.816 [2024-11-19 11:38:58.392275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.816 [2024-11-19 11:38:58.392309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.816 qpair failed and we were unable to recover it.
00:27:44.816 [2024-11-19 11:38:58.392559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.816 [2024-11-19 11:38:58.392593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.816 qpair failed and we were unable to recover it.
00:27:44.816 [2024-11-19 11:38:58.392792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.816 [2024-11-19 11:38:58.392826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.816 qpair failed and we were unable to recover it.
00:27:44.816 [2024-11-19 11:38:58.393026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.816 [2024-11-19 11:38:58.393063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.816 qpair failed and we were unable to recover it.
00:27:44.816 [2024-11-19 11:38:58.393268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.816 [2024-11-19 11:38:58.393302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.816 qpair failed and we were unable to recover it.
00:27:44.816 [2024-11-19 11:38:58.393479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.393512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.393780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.393815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.394098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.394134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.394409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.394449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.394726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.394760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.394968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.395004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.395203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.395237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.395514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.395547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.395796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.395830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.396025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.396060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.396259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.396292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.396513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.396547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.396758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.396791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.397037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.397072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.397281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.397315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.397588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.397621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.397912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.397945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.398227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.398263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.398544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.398578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.398831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.398867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.398978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.399013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.399222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.399257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.399538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.399572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.399822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.399858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.400169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.400204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.400406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.400440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.400719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.400752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.401000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.401035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.401298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.401334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.401587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.401621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.401830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.401864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.402133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.402169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.402363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.402396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.402668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.817 [2024-11-19 11:38:58.402702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.817 qpair failed and we were unable to recover it.
00:27:44.817 [2024-11-19 11:38:58.402902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.402937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.403157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.403190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.403391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.403425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.403608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.403642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.403834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.403868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.404197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.404233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.404419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.404453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.404653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.404687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.404972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.405009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.405212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.405245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.405492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.405531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.405734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.405770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.406036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.406071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.406320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.406355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.406633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.406667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.406917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.406961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.407206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.407241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.407495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.407530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.407747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.407780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.407992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.408027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.408299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.408334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.408478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.408512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.408788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.408822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.409097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.409133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.409325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.409360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.409557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.409592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.409865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.409899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.410190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.410226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.410437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.410471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.410738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.410771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.411046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.411081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.411394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.818 [2024-11-19 11:38:58.411430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.818 qpair failed and we were unable to recover it.
00:27:44.818 [2024-11-19 11:38:58.411634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.411667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.411918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.411963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.412180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.412214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.412347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.412382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.412567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.412602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.412854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.412895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.413183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.413218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.413425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.413459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.413665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.413700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.413922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.413966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.414223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.414259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.414454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.414489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.414671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.414706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.414905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.414938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.415203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.415238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.415449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.415482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.415749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.415783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.416036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.416071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.416200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.416235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.416484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.416578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.416885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.416925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.417227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.417262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.417489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.417524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.417728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.417763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.418040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.418076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.418355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.418391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.418619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.418653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.418848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.418882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.419090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.419127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.419330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.419364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.819 qpair failed and we were unable to recover it.
00:27:44.819 [2024-11-19 11:38:58.419599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.819 [2024-11-19 11:38:58.419634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.820 qpair failed and we were unable to recover it.
00:27:44.820 [2024-11-19 11:38:58.419893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.820 [2024-11-19 11:38:58.419927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.820 qpair failed and we were unable to recover it.
00:27:44.820 [2024-11-19 11:38:58.420125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.820 [2024-11-19 11:38:58.420170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.820 qpair failed and we were unable to recover it.
00:27:44.820 [2024-11-19 11:38:58.420437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.820 [2024-11-19 11:38:58.420471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.820 qpair failed and we were unable to recover it.
00:27:44.820 [2024-11-19 11:38:58.420737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.820 [2024-11-19 11:38:58.420772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.820 qpair failed and we were unable to recover it.
00:27:44.820 [2024-11-19 11:38:58.420976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.820 [2024-11-19 11:38:58.421011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.820 qpair failed and we were unable to recover it.
00:27:44.820 [2024-11-19 11:38:58.421231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.820 [2024-11-19 11:38:58.421266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.820 qpair failed and we were unable to recover it.
00:27:44.820 [2024-11-19 11:38:58.421527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.421560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.421850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.421885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.422173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.422208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.422405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.422440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.422722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.422756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 
00:27:44.820 [2024-11-19 11:38:58.423019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.423056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.423350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.423384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.423655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.423690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.423979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.424016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.424214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.424248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 
00:27:44.820 [2024-11-19 11:38:58.424398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.424433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.424687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.424721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.425025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.425061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.425341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.425376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.425581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.425616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 
00:27:44.820 [2024-11-19 11:38:58.425741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.425774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.426054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.426090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.426317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.426351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.426612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.426647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.426959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.426996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 
00:27:44.820 [2024-11-19 11:38:58.427266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.427300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.427553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.427588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.427882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.427917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.428197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.428231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.428429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.428463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 
00:27:44.820 [2024-11-19 11:38:58.428647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.428682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.820 qpair failed and we were unable to recover it. 00:27:44.820 [2024-11-19 11:38:58.428880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.820 [2024-11-19 11:38:58.428914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.429138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.429173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.429397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.429432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.429744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.429778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 
00:27:44.821 [2024-11-19 11:38:58.430087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.430124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.430411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.430446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.430590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.430622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.430920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.430967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.431223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.431256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 
00:27:44.821 [2024-11-19 11:38:58.431548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.431588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.431904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.431937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.432220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.432255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.432514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.432547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.432758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.432792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 
00:27:44.821 [2024-11-19 11:38:58.433085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.433121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.433403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.433436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.433567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.433601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.433880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.433913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.434223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.434258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 
00:27:44.821 [2024-11-19 11:38:58.434527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.434562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.434761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.434796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.435050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.435087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.435309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.435343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.435558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.435592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 
00:27:44.821 [2024-11-19 11:38:58.435777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.435812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.436091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.436127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.436405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.436439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.436727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.821 [2024-11-19 11:38:58.436761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.821 qpair failed and we were unable to recover it. 00:27:44.821 [2024-11-19 11:38:58.437036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.437071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 
00:27:44.822 [2024-11-19 11:38:58.437277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.437312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.437591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.437625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.437849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.437884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.438174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.438210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.438494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.438529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 
00:27:44.822 [2024-11-19 11:38:58.438806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.438841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.439123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.439159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.439443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.439480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.439755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.439791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.440017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.440051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 
00:27:44.822 [2024-11-19 11:38:58.440330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.440364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.440564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.440599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.440878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.440912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.441075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.441110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.441363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.441398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 
00:27:44.822 [2024-11-19 11:38:58.441669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.441703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.441986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.442022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.442300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.442336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.442600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.442635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 00:27:44.822 [2024-11-19 11:38:58.442901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.822 [2024-11-19 11:38:58.442934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.822 qpair failed and we were unable to recover it. 
00:27:44.822 [2024-11-19 11:38:58.443155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.822 [2024-11-19 11:38:58.443196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.822 qpair failed and we were unable to recover it.
00:27:44.826 [message sequence repeated through 2024-11-19 11:38:58.474670: connect() failed, errno = 111; sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it]
00:27:44.826 [2024-11-19 11:38:58.474894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.474929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.475157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.475190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.475476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.475510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.475724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.475759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.475895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.475931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 
00:27:44.826 [2024-11-19 11:38:58.476219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.476253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.476501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.476536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.476735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.476770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.476980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.477017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.477321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.477355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 
00:27:44.826 [2024-11-19 11:38:58.477633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.477667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.477958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.477994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.478261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.478296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.478484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.478517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.478796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.478831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 
00:27:44.826 [2024-11-19 11:38:58.478982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.479023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.479278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.479312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.479588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.479622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.479807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.479842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.480047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.480083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 
00:27:44.826 [2024-11-19 11:38:58.480292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.480326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.480562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.480596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.480796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.480830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.481045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.481081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.481384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.481419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 
00:27:44.826 [2024-11-19 11:38:58.481696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.481732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.481984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.826 [2024-11-19 11:38:58.482021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.826 qpair failed and we were unable to recover it. 00:27:44.826 [2024-11-19 11:38:58.482210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.482245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.482524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.482558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.482818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.482853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 
00:27:44.827 [2024-11-19 11:38:58.483145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.483181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.483363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.483397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.483589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.483634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.483905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.483939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.484190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.484224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 
00:27:44.827 [2024-11-19 11:38:58.484430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.484465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.484659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.484693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.484904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.484939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.485070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.485104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.485331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.485366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 
00:27:44.827 [2024-11-19 11:38:58.485646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.485681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.485971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.486008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.486203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.486237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.486515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.486550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.486690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.486724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 
00:27:44.827 [2024-11-19 11:38:58.487000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.487037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.487292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.487326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.487602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.487638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.487841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.487875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.488133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.488169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 
00:27:44.827 [2024-11-19 11:38:58.488357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.488390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.488593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.488627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.488878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.488911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.489212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.827 [2024-11-19 11:38:58.489248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.827 qpair failed and we were unable to recover it. 00:27:44.827 [2024-11-19 11:38:58.489531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.489564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 
00:27:44.828 [2024-11-19 11:38:58.489818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.489852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.490070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.490105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.490312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.490346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.490622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.490656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.490941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.491000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 
00:27:44.828 [2024-11-19 11:38:58.491204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.491237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.491494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.491528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.491723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.491758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.492012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.492048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.492326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.492361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 
00:27:44.828 [2024-11-19 11:38:58.492647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.492682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.492877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.492911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.493187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.493223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.493451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.493485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.493766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.493801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 
00:27:44.828 [2024-11-19 11:38:58.493987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.494048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.494311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.494346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.494557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.494598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.494781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.494815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 00:27:44.828 [2024-11-19 11:38:58.495042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.495076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 
00:27:44.828 [2024-11-19 11:38:58.495386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.828 [2024-11-19 11:38:58.495421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.828 qpair failed and we were unable to recover it. 
[... the three messages above repeat ~115 times (00:27:44.828-00:27:44.832, target timestamps 11:38:58.495-11:38:58.527), differing only in timestamps; every repetition reports the same tqpair=0x7f5068000b90, addr=10.0.0.2, port=4420, errno = 111 ...]
00:27:44.832 [2024-11-19 11:38:58.527863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.527896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.528113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.528149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.528335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.528374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.528576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.528610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.528837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.528871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 
00:27:44.832 [2024-11-19 11:38:58.529150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.529187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.529448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.529481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.529612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.529645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.529923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.529965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.530270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.530303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 
00:27:44.832 [2024-11-19 11:38:58.530589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.530622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.530905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.530939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.531268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.531301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.531517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.531551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.531734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.531766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 
00:27:44.832 [2024-11-19 11:38:58.532048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.832 [2024-11-19 11:38:58.532083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.832 qpair failed and we were unable to recover it. 00:27:44.832 [2024-11-19 11:38:58.532324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.532358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.532556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.532590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.532795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.532828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.533026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.533061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 
00:27:44.833 [2024-11-19 11:38:58.533179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.533213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.533513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.533546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.533838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.533872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.534077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.534112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.534316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.534349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 
00:27:44.833 [2024-11-19 11:38:58.534495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.534531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.534794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.534833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.534999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.535044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.535245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.535284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.535522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.535559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 
00:27:44.833 [2024-11-19 11:38:58.535764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.535800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.536031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.536068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.536255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.536291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.536496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.536535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.536804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.536839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 
00:27:44.833 [2024-11-19 11:38:58.537102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.537138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.537346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.537380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.537604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.537639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.537853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.537893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.538190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.538228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 
00:27:44.833 [2024-11-19 11:38:58.538412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.538449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.538724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.538759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.538994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.539044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.539348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.539387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.539667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.539705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 
00:27:44.833 [2024-11-19 11:38:58.539980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.540016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.833 qpair failed and we were unable to recover it. 00:27:44.833 [2024-11-19 11:38:58.540301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.833 [2024-11-19 11:38:58.540339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.540571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.540612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.540815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.540856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.541138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.541175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 
00:27:44.834 [2024-11-19 11:38:58.541324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.541360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.541485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.541518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.541794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.541835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.542033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.542071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.542331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.542368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 
00:27:44.834 [2024-11-19 11:38:58.542641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.542677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.542903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.542939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.543134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.543171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.543384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.543425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.543652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.543691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 
00:27:44.834 [2024-11-19 11:38:58.543830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.543867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.544120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.544155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.544430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.544465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.544662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.544699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.544933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.544990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 
00:27:44.834 [2024-11-19 11:38:58.545315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.545351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.545591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.545627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.545885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.545921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.546154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.546194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.546428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.546465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 
00:27:44.834 [2024-11-19 11:38:58.546621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.546659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.546783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.546818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.547102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.547139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.547368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.547404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 00:27:44.834 [2024-11-19 11:38:58.547619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.834 [2024-11-19 11:38:58.547656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:44.834 qpair failed and we were unable to recover it. 
00:27:44.834 [2024-11-19 11:38:58.547850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.834 [2024-11-19 11:38:58.547887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:44.835 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 11:38:58.547 through 11:38:58.579 for tqpair=0x7f5068000b90 and tqpair=0x7f5070000b90, always with addr=10.0.0.2, port=4420 ...]
00:27:45.118 [2024-11-19 11:38:58.579238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.118 [2024-11-19 11:38:58.579275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.118 qpair failed and we were unable to recover it.
00:27:45.118 [2024-11-19 11:38:58.579476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.579511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.579658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.579694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.579971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.580008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.580317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.580353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.580655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.580689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 
00:27:45.118 [2024-11-19 11:38:58.580975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.581013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.581238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.581273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.581543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.581577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.581860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.581895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.582066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.582102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 
00:27:45.118 [2024-11-19 11:38:58.582263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.582299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.582457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.582490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.582764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.582806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.583085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.583121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.583321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.583356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 
00:27:45.118 [2024-11-19 11:38:58.583592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.583626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.583771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.583806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.583961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.583996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.584278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.584314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.584537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.584572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 
00:27:45.118 [2024-11-19 11:38:58.584773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.584809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.585069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.585108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.585242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.118 [2024-11-19 11:38:58.585278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.118 qpair failed and we were unable to recover it. 00:27:45.118 [2024-11-19 11:38:58.585483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.585517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.585793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.585828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 
00:27:45.119 [2024-11-19 11:38:58.586030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.586066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.586201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.586236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.586369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.586403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.586623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.586658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.586803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.586838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 
00:27:45.119 [2024-11-19 11:38:58.587042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.587079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.587283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.587319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.587449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.587482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.587683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.587718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.587920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.587976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 
00:27:45.119 [2024-11-19 11:38:58.588108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.588142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.588410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.588444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.588643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.588677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.588878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.588912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.589059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.589095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 
00:27:45.119 [2024-11-19 11:38:58.589302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.589338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.589542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.589578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.589744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.589779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.590001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.590037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.590231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.590265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 
00:27:45.119 [2024-11-19 11:38:58.590396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.590431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.590650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.590684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.590832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.590866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.591000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.591036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.591249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.591285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 
00:27:45.119 [2024-11-19 11:38:58.591407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.591443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.591563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.591598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.591782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.591823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.592038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.592075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.592212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.592247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 
00:27:45.119 [2024-11-19 11:38:58.592380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.592415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.592610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.592645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.592959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.592994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.593195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.593229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.119 [2024-11-19 11:38:58.593506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.593540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 
00:27:45.119 [2024-11-19 11:38:58.593726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.119 [2024-11-19 11:38:58.593760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.119 qpair failed and we were unable to recover it. 00:27:45.120 [2024-11-19 11:38:58.594030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.594067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 00:27:45.120 [2024-11-19 11:38:58.594210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.594244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 00:27:45.120 [2024-11-19 11:38:58.594440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.594475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 00:27:45.120 [2024-11-19 11:38:58.594701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.594737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 
00:27:45.120 [2024-11-19 11:38:58.594941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.594990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 00:27:45.120 [2024-11-19 11:38:58.595202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.595238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 00:27:45.120 [2024-11-19 11:38:58.595535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.595570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 00:27:45.120 [2024-11-19 11:38:58.595853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.595888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 00:27:45.120 [2024-11-19 11:38:58.596136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.596173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 
00:27:45.120 [2024-11-19 11:38:58.596483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.596518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 00:27:45.120 [2024-11-19 11:38:58.596641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.596674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 00:27:45.120 [2024-11-19 11:38:58.596895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.596930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 00:27:45.120 [2024-11-19 11:38:58.597157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.597192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 00:27:45.120 [2024-11-19 11:38:58.597474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.120 [2024-11-19 11:38:58.597509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.120 qpair failed and we were unable to recover it. 
00:27:45.120 [2024-11-19 11:38:58.597749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.120 [2024-11-19 11:38:58.597784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.120 qpair failed and we were unable to recover it.
00:27:45.123 [... identical error group repeats from 11:38:58.597921 through 11:38:58.628347: posix.c:1054:posix_sock_create connect() failed with errno = 111 (connection refused), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it." ...]
00:27:45.123 [2024-11-19 11:38:58.628489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.628523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 00:27:45.123 [2024-11-19 11:38:58.628744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.628778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 00:27:45.123 [2024-11-19 11:38:58.628972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.629009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 00:27:45.123 [2024-11-19 11:38:58.629149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.629184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 00:27:45.123 [2024-11-19 11:38:58.629388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.629422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 
00:27:45.123 [2024-11-19 11:38:58.629624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.629659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 00:27:45.123 [2024-11-19 11:38:58.629854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.629889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 00:27:45.123 [2024-11-19 11:38:58.630106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.630142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 00:27:45.123 [2024-11-19 11:38:58.630395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.630428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 00:27:45.123 [2024-11-19 11:38:58.630632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.630672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 
00:27:45.123 [2024-11-19 11:38:58.630876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.630910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 00:27:45.123 [2024-11-19 11:38:58.631060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.631095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 00:27:45.123 [2024-11-19 11:38:58.631345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.631379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 00:27:45.123 [2024-11-19 11:38:58.631584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.631619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.123 qpair failed and we were unable to recover it. 00:27:45.123 [2024-11-19 11:38:58.631879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.123 [2024-11-19 11:38:58.631913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 
00:27:45.124 [2024-11-19 11:38:58.632122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.632157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.632280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.632312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.632455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.632489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.632764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.632797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.632982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.633018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 
00:27:45.124 [2024-11-19 11:38:58.633167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.633203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.633337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.633370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.633494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.633528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.633742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.633776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.634050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.634085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 
00:27:45.124 [2024-11-19 11:38:58.634292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.634326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.634541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.634573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.634826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.634860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.635066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.635101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.635382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.635417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 
00:27:45.124 [2024-11-19 11:38:58.635679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.635712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.635980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.636015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.636218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.636261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.636440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.636473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.636669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.636702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 
00:27:45.124 [2024-11-19 11:38:58.636973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.637008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.637161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.637194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.637468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.637502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.637726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.637760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.637937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.637982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 
00:27:45.124 [2024-11-19 11:38:58.638186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.638219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.638435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.638468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.638624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.638657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.638803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.638838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.639037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.639072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 
00:27:45.124 [2024-11-19 11:38:58.639206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.639239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.639518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.639551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.639696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.639729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.639843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.639876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.640018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.640059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 
00:27:45.124 [2024-11-19 11:38:58.640312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.640346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.640595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.124 [2024-11-19 11:38:58.640629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.124 qpair failed and we were unable to recover it. 00:27:45.124 [2024-11-19 11:38:58.640825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.640859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.641074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.641109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.641302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.641336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 
00:27:45.125 [2024-11-19 11:38:58.641531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.641564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.641747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.641781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.641971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.642006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.642132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.642165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.642361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.642395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 
00:27:45.125 [2024-11-19 11:38:58.642657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.642692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.642819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.642852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.642993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.643028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.643222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.643256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.643448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.643482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 
00:27:45.125 [2024-11-19 11:38:58.643601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.643634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.643928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.643992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.644110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.644144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.644273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.644307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.644488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.644521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 
00:27:45.125 [2024-11-19 11:38:58.644808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.644842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.645023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.645058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.645278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.645311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.645509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.645542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 00:27:45.125 [2024-11-19 11:38:58.645679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.125 [2024-11-19 11:38:58.645712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.125 qpair failed and we were unable to recover it. 
00:27:45.125 [2024-11-19 11:38:58.645902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.125 [2024-11-19 11:38:58.645935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.125 qpair failed and we were unable to recover it.
00:27:45.128 [2024-11-19 11:38:58.672194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.128 [2024-11-19 11:38:58.672229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.128 qpair failed and we were unable to recover it. 00:27:45.128 [2024-11-19 11:38:58.672480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.128 [2024-11-19 11:38:58.672513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.128 qpair failed and we were unable to recover it. 00:27:45.128 [2024-11-19 11:38:58.672628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.128 [2024-11-19 11:38:58.672661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.128 qpair failed and we were unable to recover it. 00:27:45.128 [2024-11-19 11:38:58.672913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.128 [2024-11-19 11:38:58.672959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.128 qpair failed and we were unable to recover it. 00:27:45.128 [2024-11-19 11:38:58.673145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.128 [2024-11-19 11:38:58.673178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.128 qpair failed and we were unable to recover it. 
00:27:45.128 [2024-11-19 11:38:58.673385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.128 [2024-11-19 11:38:58.673418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.128 qpair failed and we were unable to recover it. 00:27:45.128 [2024-11-19 11:38:58.673545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.128 [2024-11-19 11:38:58.673578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.128 qpair failed and we were unable to recover it. 00:27:45.128 [2024-11-19 11:38:58.673711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.128 [2024-11-19 11:38:58.673745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.128 qpair failed and we were unable to recover it. 00:27:45.128 [2024-11-19 11:38:58.673920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.128 [2024-11-19 11:38:58.673964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.128 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.674167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.674200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 
00:27:45.129 [2024-11-19 11:38:58.674400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.674434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.674574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.674607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.674877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.674910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.675128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.675163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.675383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.675416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 
00:27:45.129 [2024-11-19 11:38:58.675549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.675582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.675701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.675735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.675921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.675963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.676148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.676181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.676401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.676440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 
00:27:45.129 [2024-11-19 11:38:58.676700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.676733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.676996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.677032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.677218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.677252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.677513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.677544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.677731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.677765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 
00:27:45.129 [2024-11-19 11:38:58.678044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.678079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.678203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.678237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.678343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.678376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.678621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.678654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.678829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.678863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 
00:27:45.129 [2024-11-19 11:38:58.679142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.679177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.679373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.679406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.679650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.679683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.679878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.679912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.680111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.680146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 
00:27:45.129 [2024-11-19 11:38:58.680394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.680427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.680614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.680646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.680917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.680964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.681234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.681266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.681513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.681547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 
00:27:45.129 [2024-11-19 11:38:58.681798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.681831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.682019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.682054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.682251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.682284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.682476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.682510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.682705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.682738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 
00:27:45.129 [2024-11-19 11:38:58.682925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.682969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.129 [2024-11-19 11:38:58.683154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.129 [2024-11-19 11:38:58.683188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.129 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.683378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.683411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.683624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.683657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.683785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.683819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 
00:27:45.130 [2024-11-19 11:38:58.684027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.684062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.684253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.684287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.684422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.684455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.684563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.684596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.684702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.684733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 
00:27:45.130 [2024-11-19 11:38:58.685001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.685036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.685163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.685196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.685384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.685416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.685547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.685581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.685697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.685736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 
00:27:45.130 [2024-11-19 11:38:58.685963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.685998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.686246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.686280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.686523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.686556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.686785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.686818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.687049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.687084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 
00:27:45.130 [2024-11-19 11:38:58.687206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.687240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.687445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.687478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.687625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.687659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.687844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.687877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.688123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.688158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 
00:27:45.130 [2024-11-19 11:38:58.688280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.688314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.688505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.688538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.688779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.688813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.688940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.688983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 00:27:45.130 [2024-11-19 11:38:58.689089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.130 [2024-11-19 11:38:58.689122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.130 qpair failed and we were unable to recover it. 
00:27:45.130 [2024-11-19 11:38:58.689306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.130 [2024-11-19 11:38:58.689338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.130 qpair failed and we were unable to recover it.
00:27:45.130 [2024-11-19 11:38:58.689492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.130 [2024-11-19 11:38:58.689525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.130 qpair failed and we were unable to recover it.
00:27:45.130 [2024-11-19 11:38:58.689643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.130 [2024-11-19 11:38:58.689675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.130 qpair failed and we were unable to recover it.
00:27:45.130 [2024-11-19 11:38:58.689870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.130 [2024-11-19 11:38:58.689904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.130 qpair failed and we were unable to recover it.
00:27:45.130 [2024-11-19 11:38:58.690098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.130 [2024-11-19 11:38:58.690132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.130 qpair failed and we were unable to recover it.
00:27:45.130 [2024-11-19 11:38:58.690241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.690274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.690456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.690489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.690675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.690708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.690880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.690912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.691140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.691174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.691306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.691338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.691524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.691559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.691750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.691783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.691991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.692027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.692277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.692311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.692494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.692528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.692792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.692826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.693032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.693067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.693250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.693284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.693470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.693504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.693770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.693804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.694023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.694058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.694232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.694264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.694386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.694418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.694633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.694673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.694843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.694876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.695053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.695089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.695265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.695297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.695426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.695460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.695592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.695625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.695835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.695868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.696110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.696145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.696260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.696293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.696509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.696542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.696679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.696712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.696897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.696930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.697080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.697115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.697358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.697390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.697522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.697555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.697677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.697711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.697906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.697939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.698219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.698252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.131 qpair failed and we were unable to recover it.
00:27:45.131 [2024-11-19 11:38:58.698514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.131 [2024-11-19 11:38:58.698547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.698649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.698682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.698814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.698847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.699076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.699111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.699234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.699268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.699464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.699497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.699695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.699728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.699913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.699946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.700156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.700189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.700383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.700417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.700603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.700636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.700813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.700846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.701020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.701055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.701236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.701269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.701536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.701568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.701760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.701793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.701993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.702047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.702248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.702281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.702481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.702514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.702722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.702755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.703019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.703053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.703253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.703287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.703539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.703582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.703776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.703809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.703994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.704030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.704329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.704362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.704482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.704512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.704774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.704808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.704980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.705015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.705290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.705324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.705457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.705490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.705757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.705790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.705909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.705943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.706081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.706115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.706357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.706391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.706581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.706614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.706806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.706839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.707105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.132 [2024-11-19 11:38:58.707139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.132 qpair failed and we were unable to recover it.
00:27:45.132 [2024-11-19 11:38:58.707325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.707358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.707544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.707578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.707759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.707792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.707990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.708025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.708155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.708189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.708318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.708351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.708595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.708628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.708756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.708790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.708969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.709005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.709150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.709184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.709357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.709390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.709574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8af0 is same with the state(6) to be set
00:27:45.133 [2024-11-19 11:38:58.709932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.710027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.710263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.710300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.710417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.710453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.710568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.710601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.710717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.710750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.710942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.710990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.711203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.711235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.711373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.711407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.711598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.711631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.711765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.711798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.712007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.712042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.712179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.712212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.712389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.712421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.712552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.712584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.712825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.712858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.713043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.713078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.713322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.713355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.713530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.713564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.713750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.713784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.714051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.714085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.714209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.714243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.714378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.714412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.714588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.714621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.714810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.133 [2024-11-19 11:38:58.714844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-11-19 11:38:58.714971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.133 [2024-11-19 11:38:58.715006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.133 qpair failed and we were unable to recover it. 00:27:45.133 [2024-11-19 11:38:58.715124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.133 [2024-11-19 11:38:58.715158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.133 qpair failed and we were unable to recover it. 00:27:45.133 [2024-11-19 11:38:58.715267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.133 [2024-11-19 11:38:58.715308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.133 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.715502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.715535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.715733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.715767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 
00:27:45.134 [2024-11-19 11:38:58.715968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.716003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.716196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.716230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.716424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.716456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.716643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.716676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.716816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.716849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 
00:27:45.134 [2024-11-19 11:38:58.717042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.717078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.717260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.717293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.717486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.717520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.717654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.717687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.717876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.717909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 
00:27:45.134 [2024-11-19 11:38:58.718124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.718159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.718340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.718373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.718588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.718621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.718892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.718925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.719150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.719184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 
00:27:45.134 [2024-11-19 11:38:58.719366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.719399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.719609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.719642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.719769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.719802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.719924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.719966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.720156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.720188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 
00:27:45.134 [2024-11-19 11:38:58.720362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.720396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.720584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.720616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.720750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.720783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.721023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.721057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.721187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.721221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 
00:27:45.134 [2024-11-19 11:38:58.721353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.721386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.721573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.721606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.721796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.721829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.722078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.722113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.722304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.722337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 
00:27:45.134 [2024-11-19 11:38:58.722528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.722562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.722758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.722790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.723032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.723067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.723207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.723240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 00:27:45.134 [2024-11-19 11:38:58.723482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.134 [2024-11-19 11:38:58.723515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.134 qpair failed and we were unable to recover it. 
00:27:45.134 [2024-11-19 11:38:58.723790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.723823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.723966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.724000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.724190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.724229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.724417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.724450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.724687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.724720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 
00:27:45.135 [2024-11-19 11:38:58.724983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.725019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.725139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.725171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.725357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.725390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.725575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.725607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.725795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.725828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 
00:27:45.135 [2024-11-19 11:38:58.725944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.725985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.726111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.726144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.726410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.726443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.726643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.726676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.726918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.726958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 
00:27:45.135 [2024-11-19 11:38:58.727137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.727171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.727312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.727346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.727542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.727575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.727823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.727855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.727980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.728014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 
00:27:45.135 [2024-11-19 11:38:58.728225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.728258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.728383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.728416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.728664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.728697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.728881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.728914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.729045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.729083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 
00:27:45.135 [2024-11-19 11:38:58.729360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.729394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.729533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.729565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.729749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.729783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.729997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.730033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.730231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.730264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 
00:27:45.135 [2024-11-19 11:38:58.730458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.730490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.730738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.730772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.730905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.730938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.135 [2024-11-19 11:38:58.731125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.135 [2024-11-19 11:38:58.731159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.135 qpair failed and we were unable to recover it. 00:27:45.136 [2024-11-19 11:38:58.731333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.136 [2024-11-19 11:38:58.731366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.136 qpair failed and we were unable to recover it. 
00:27:45.136 [2024-11-19 11:38:58.731487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.136 [2024-11-19 11:38:58.731520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.136 qpair failed and we were unable to recover it.
00:27:45.137 [2024-11-19 11:38:58.747634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.138 [2024-11-19 11:38:58.747709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.138 qpair failed and we were unable to recover it.
00:27:45.139 [2024-11-19 11:38:58.756727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.756760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.756960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.756993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.757179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.757213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.757336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.757369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.757476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.757509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 
00:27:45.139 [2024-11-19 11:38:58.757639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.757672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.757912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.757966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.758159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.758193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.758315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.758348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.758469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.758502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 
00:27:45.139 [2024-11-19 11:38:58.758752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.758785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.759030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.759065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.759272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.759305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.759574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.759612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.759797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.759830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 
00:27:45.139 [2024-11-19 11:38:58.760021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.760056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.760240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.760273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.760454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.760487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.760607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.760638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.760837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.760870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 
00:27:45.139 [2024-11-19 11:38:58.761116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.761151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.761398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.761430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.761618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.761651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.761904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.761937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.762068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.762100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 
00:27:45.139 [2024-11-19 11:38:58.762289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.762322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.762563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.762597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.762882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.762916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.763105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.763140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.763376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.763409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 
00:27:45.139 [2024-11-19 11:38:58.763535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.763567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.763812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.763844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.764019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.764054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.764237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.764269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.139 qpair failed and we were unable to recover it. 00:27:45.139 [2024-11-19 11:38:58.764510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.139 [2024-11-19 11:38:58.764543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 
00:27:45.140 [2024-11-19 11:38:58.764745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.764778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.764906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.764939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.765081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.765116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.765220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.765253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.765424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.765457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 
00:27:45.140 [2024-11-19 11:38:58.765636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.765669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.765800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.765833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.765959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.765992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.766183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.766216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.766452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.766485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 
00:27:45.140 [2024-11-19 11:38:58.766693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.766725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.766989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.767023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.767207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.767240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.767433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.767466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.767659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.767692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 
00:27:45.140 [2024-11-19 11:38:58.767867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.767900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.768152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.768187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.768372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.768406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.768577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.768609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.768778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.768816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 
00:27:45.140 [2024-11-19 11:38:58.769056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.769090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.769217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.769250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.769435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.769469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.769644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.769677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.769851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.769883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 
00:27:45.140 [2024-11-19 11:38:58.770145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.770179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.140 qpair failed and we were unable to recover it. 00:27:45.140 [2024-11-19 11:38:58.770415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.140 [2024-11-19 11:38:58.770448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.770632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.770664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.770843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.770876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.771062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.771097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 
00:27:45.141 [2024-11-19 11:38:58.771275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.771308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.771498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.771531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.771636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.771669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.771785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.771819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.771987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.772021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 
00:27:45.141 [2024-11-19 11:38:58.772193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.772226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.772411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.772443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.772682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.772715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.772840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.772873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.773059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.773094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 
00:27:45.141 [2024-11-19 11:38:58.773333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.773366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.773492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.773526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.773658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.773690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.773961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.773999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 00:27:45.141 [2024-11-19 11:38:58.774182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.141 [2024-11-19 11:38:58.774215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.141 qpair failed and we were unable to recover it. 
00:27:45.144 [2024-11-19 11:38:58.798267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.798307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.798484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.798517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.798653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.798686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.798897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.798930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.799127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.799161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 
00:27:45.144 [2024-11-19 11:38:58.799278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.799310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.799499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.799533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.799747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.799780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.799981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.800016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.800204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.800238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 
00:27:45.144 [2024-11-19 11:38:58.800358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.800391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.800650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.800683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.800937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.800981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.801098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.801131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.801255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.801288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 
00:27:45.144 [2024-11-19 11:38:58.801461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.801493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.801667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.801700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.801813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.801844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.802081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.802117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.802299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.802332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 
00:27:45.144 [2024-11-19 11:38:58.802600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.802633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.802885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.802918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.803176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.803247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.803452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.803490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.144 qpair failed and we were unable to recover it. 00:27:45.144 [2024-11-19 11:38:58.803733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.144 [2024-11-19 11:38:58.803767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 
00:27:45.145 [2024-11-19 11:38:58.803887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.803920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.804131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.804164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.804363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.804397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.804529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.804562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.804747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.804780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 
00:27:45.145 [2024-11-19 11:38:58.804906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.804939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.805074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.805108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.805320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.805354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.805466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.805498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.805684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.805719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 
00:27:45.145 [2024-11-19 11:38:58.805894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.805927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.806219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.806253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.806511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.806543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.806787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.806821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.807057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.807091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 
00:27:45.145 [2024-11-19 11:38:58.807215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.807247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.807526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.807559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.807810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.807842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.807982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.808016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.808285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.808320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 
00:27:45.145 [2024-11-19 11:38:58.808532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.808564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.808752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.808785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.809045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.809080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.809277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.809310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.809493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.809526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 
00:27:45.145 [2024-11-19 11:38:58.809704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.809736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.809860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.809894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.810074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.810108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.810284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.810316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.810531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.810569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 
00:27:45.145 [2024-11-19 11:38:58.810755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.810788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.810902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.810934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.811191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.811225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.811462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.811494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.811689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.811721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 
00:27:45.145 [2024-11-19 11:38:58.811918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.811959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.812157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.812191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.812448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.812481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.812719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.812751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.813035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.813070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 
00:27:45.145 [2024-11-19 11:38:58.813208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.813241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.813430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.813463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.813651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.813684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.813876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.813909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.814043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.814078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 
00:27:45.145 [2024-11-19 11:38:58.814269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.145 [2024-11-19 11:38:58.814303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.145 qpair failed and we were unable to recover it. 00:27:45.145 [2024-11-19 11:38:58.814427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.146 [2024-11-19 11:38:58.814460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.146 qpair failed and we were unable to recover it. 00:27:45.146 [2024-11-19 11:38:58.814721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.146 [2024-11-19 11:38:58.814754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.146 qpair failed and we were unable to recover it. 00:27:45.146 [2024-11-19 11:38:58.814997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.146 [2024-11-19 11:38:58.815031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.146 qpair failed and we were unable to recover it. 00:27:45.146 [2024-11-19 11:38:58.815136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.146 [2024-11-19 11:38:58.815171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.146 qpair failed and we were unable to recover it. 
00:27:45.146 [2024-11-19 11:38:58.815304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.146 [2024-11-19 11:38:58.815337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.146 qpair failed and we were unable to recover it. 00:27:45.146 [2024-11-19 11:38:58.815451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.146 [2024-11-19 11:38:58.815484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.146 qpair failed and we were unable to recover it. 00:27:45.146 [2024-11-19 11:38:58.815697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.146 [2024-11-19 11:38:58.815731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.146 qpair failed and we were unable to recover it. 00:27:45.146 [2024-11-19 11:38:58.815903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.146 [2024-11-19 11:38:58.815937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.146 qpair failed and we were unable to recover it. 00:27:45.146 [2024-11-19 11:38:58.816190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.146 [2024-11-19 11:38:58.816223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.146 qpair failed and we were unable to recover it. 
00:27:45.148 [2024-11-19 11:38:58.840393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.840426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.840547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.840580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.840773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.840807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.841045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.841078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.841195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.841228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 
00:27:45.148 [2024-11-19 11:38:58.841361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.841394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.841512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.841546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.841724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.841756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.841963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.841997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.842172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.842205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 
00:27:45.148 [2024-11-19 11:38:58.842326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.842359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.842592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.842625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.842877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.842911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.843108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.843142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.843412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.843445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 
00:27:45.148 [2024-11-19 11:38:58.843565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.843599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.148 [2024-11-19 11:38:58.843743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.148 [2024-11-19 11:38:58.843776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.148 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.843960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.843995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.844232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.844267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.844398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.844431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 
00:27:45.149 [2024-11-19 11:38:58.844613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.844646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.844833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.844867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.845040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.845075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.845267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.845300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.845497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.845535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 
00:27:45.149 [2024-11-19 11:38:58.845711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.845744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.846000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.846034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.846155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.846188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.846361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.846393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.846564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.846597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 
00:27:45.149 [2024-11-19 11:38:58.846887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.846919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.847116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.847150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.847271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.847305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.847487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.847520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.847701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.847733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 
00:27:45.149 [2024-11-19 11:38:58.847973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.848007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.848178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.848212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.848334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.848367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.848499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.848532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.848635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.848667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 
00:27:45.149 [2024-11-19 11:38:58.848871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.848904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.849103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.849137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.849330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.849364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.849532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.849564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.849683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.849716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 
00:27:45.149 [2024-11-19 11:38:58.849834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.849866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.850106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.850141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.850268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.850302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.850475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.850508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.850702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.850734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 
00:27:45.149 [2024-11-19 11:38:58.850858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.850892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.851116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.851150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.851334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.851367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.851499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.851531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.851647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.851680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 
00:27:45.149 [2024-11-19 11:38:58.851809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.851841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.851966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.852001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.852110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.852142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.852309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.852343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.852516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.852548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 
00:27:45.149 [2024-11-19 11:38:58.852721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.852754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.852944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.853007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.853122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.853155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.853330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.149 [2024-11-19 11:38:58.853364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.149 qpair failed and we were unable to recover it. 00:27:45.149 [2024-11-19 11:38:58.853502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.853542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 
00:27:45.150 [2024-11-19 11:38:58.853743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.853776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 00:27:45.150 [2024-11-19 11:38:58.853892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.853925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 00:27:45.150 [2024-11-19 11:38:58.854112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.854145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 00:27:45.150 [2024-11-19 11:38:58.854255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.854288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 00:27:45.150 [2024-11-19 11:38:58.854410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.854443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 
00:27:45.150 [2024-11-19 11:38:58.854704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.854737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 00:27:45.150 [2024-11-19 11:38:58.854862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.854895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 00:27:45.150 [2024-11-19 11:38:58.855028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.855061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 00:27:45.150 [2024-11-19 11:38:58.855320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.855354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 00:27:45.150 [2024-11-19 11:38:58.855563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.855596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 
00:27:45.150 [2024-11-19 11:38:58.855713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.855745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 00:27:45.150 [2024-11-19 11:38:58.855857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.855889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 00:27:45.150 [2024-11-19 11:38:58.856092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.856126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 00:27:45.150 [2024-11-19 11:38:58.856261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.856293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 00:27:45.150 [2024-11-19 11:38:58.856470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.150 [2024-11-19 11:38:58.856503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.150 qpair failed and we were unable to recover it. 
00:27:45.443 [2024-11-19 11:38:58.876586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.443 [2024-11-19 11:38:58.876656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.443 qpair failed and we were unable to recover it.
00:27:45.444 [2024-11-19 11:38:58.880401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.880433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.880700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.880732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.880843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.880876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.880992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.881029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.881162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.881194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 
00:27:45.444 [2024-11-19 11:38:58.881376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.881406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.881641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.881674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.881802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.881834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.882003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.882044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.882247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.882279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 
00:27:45.444 [2024-11-19 11:38:58.882459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.882490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.882674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.882706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.882835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.882867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.882993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.883026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.883138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.883169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 
00:27:45.444 [2024-11-19 11:38:58.883298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.883333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.883515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.883547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.883671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.883704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.883815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.883848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.883990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.884023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 
00:27:45.444 [2024-11-19 11:38:58.884235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.884268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.884446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.884479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.884680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.884712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.444 qpair failed and we were unable to recover it. 00:27:45.444 [2024-11-19 11:38:58.884883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.444 [2024-11-19 11:38:58.884915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.885050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.885084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 
00:27:45.445 [2024-11-19 11:38:58.885264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.885296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.885480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.885514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.885632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.885665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.885835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.885867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.886052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.886085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 
00:27:45.445 [2024-11-19 11:38:58.886194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.886226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.886339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.886372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.886501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.886535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.886722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.886755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.886930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.886972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 
00:27:45.445 [2024-11-19 11:38:58.887082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.887121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.887242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.887274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.887396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.887428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.887535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.887567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.887696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.887728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 
00:27:45.445 [2024-11-19 11:38:58.887901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.887934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.888126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.888159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.888282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.888315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.888435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.888468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.888639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.888672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 
00:27:45.445 [2024-11-19 11:38:58.888802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.888834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.889143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.889178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.889351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.889382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.889506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.889538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.889737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.889769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 
00:27:45.445 [2024-11-19 11:38:58.889940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.889982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.890159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.890191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.890300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.890332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.890447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.890478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.890608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.890640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 
00:27:45.445 [2024-11-19 11:38:58.890760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.890791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.890976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.891009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.891270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.891302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.891475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.891506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.891696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.891729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 
00:27:45.445 [2024-11-19 11:38:58.891915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.891956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.445 qpair failed and we were unable to recover it. 00:27:45.445 [2024-11-19 11:38:58.892084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.445 [2024-11-19 11:38:58.892116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 00:27:45.446 [2024-11-19 11:38:58.892224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.892258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 00:27:45.446 [2024-11-19 11:38:58.892394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.892425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 00:27:45.446 [2024-11-19 11:38:58.892610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.892642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 
00:27:45.446 [2024-11-19 11:38:58.894189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.894241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 00:27:45.446 [2024-11-19 11:38:58.894464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.894506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 00:27:45.446 [2024-11-19 11:38:58.894774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.894806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 00:27:45.446 [2024-11-19 11:38:58.894991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.895024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 00:27:45.446 [2024-11-19 11:38:58.895213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.895245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 
00:27:45.446 [2024-11-19 11:38:58.895416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.895448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 00:27:45.446 [2024-11-19 11:38:58.895567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.895598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 00:27:45.446 [2024-11-19 11:38:58.895710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.895741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 00:27:45.446 [2024-11-19 11:38:58.895884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.895916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 00:27:45.446 [2024-11-19 11:38:58.896068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.446 [2024-11-19 11:38:58.896101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.446 qpair failed and we were unable to recover it. 
00:27:45.446 [2024-11-19 11:38:58.896240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.446 [2024-11-19 11:38:58.896272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.446 qpair failed and we were unable to recover it.
[... the same connect()-failed / qpair-failed triple repeats for tqpair=0xadaba0 from 11:38:58.896516 through 11:38:58.904680 ...]
00:27:45.447 [2024-11-19 11:38:58.904916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.447 [2024-11-19 11:38:58.904993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.447 qpair failed and we were unable to recover it.
[... the triple repeats for tqpair=0x7f5064000b90 through 11:38:58.910918 ...]
00:27:45.448 [2024-11-19 11:38:58.911152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.448 [2024-11-19 11:38:58.911224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.448 qpair failed and we were unable to recover it.
[... the triple repeats for tqpair=0x7f5070000b90 through 11:38:58.913043, then again for tqpair=0x7f5064000b90 from 11:38:58.913180 ...]
00:27:45.449 [2024-11-19 11:38:58.918960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.449 [2024-11-19 11:38:58.918992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.449 qpair failed and we were unable to recover it.
00:27:45.449 [2024-11-19 11:38:58.919170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.919203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.919387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.919418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.919534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.919567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.919697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.919729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.919847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.919878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 
00:27:45.449 [2024-11-19 11:38:58.919988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.920020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.920132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.920163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.920427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.920465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.920582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.920613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.920795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.920826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 
00:27:45.449 [2024-11-19 11:38:58.920930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.920972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.921210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.921242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.921417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.921449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.921564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.921596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.921769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.921800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 
00:27:45.449 [2024-11-19 11:38:58.921921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.921969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.449 qpair failed and we were unable to recover it. 00:27:45.449 [2024-11-19 11:38:58.922084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.449 [2024-11-19 11:38:58.922115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.922217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.922248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.922423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.922455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.922629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.922660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 
00:27:45.450 [2024-11-19 11:38:58.922771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.922802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.922924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.922964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.923103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.923134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.923257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.923288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.923422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.923453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 
00:27:45.450 [2024-11-19 11:38:58.923637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.923668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.923774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.923806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.923979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.924012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.924117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.924148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.924253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.924286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 
00:27:45.450 [2024-11-19 11:38:58.924536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.924567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.924753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.924784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.924893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.924924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.925104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.925136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.925262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.925293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 
00:27:45.450 [2024-11-19 11:38:58.925399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.925430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.925556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.925587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.925765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.925796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.925988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.926021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.926142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.926173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 
00:27:45.450 [2024-11-19 11:38:58.926356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.926387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.926501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.926532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.926671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.926703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.926876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.926907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.927026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.927058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 
00:27:45.450 [2024-11-19 11:38:58.927165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.927197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.927371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.927402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.927522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.927559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.927679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.927709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.927895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.927926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 
00:27:45.450 [2024-11-19 11:38:58.928110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.928142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.928263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.928293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.928401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.928432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.928531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.450 [2024-11-19 11:38:58.928562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.450 qpair failed and we were unable to recover it. 00:27:45.450 [2024-11-19 11:38:58.928735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.928765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 
00:27:45.451 [2024-11-19 11:38:58.928891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.928922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.929055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.929087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.929279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.929311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.929414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.929445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.929651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.929682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 
00:27:45.451 [2024-11-19 11:38:58.929787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.929818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.930024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.930056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.930175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.930206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.930391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.930423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.930536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.930567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 
00:27:45.451 [2024-11-19 11:38:58.930736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.930768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.930889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.930927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.931069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.931101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.931277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.931308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.931424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.931454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 
00:27:45.451 [2024-11-19 11:38:58.931628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.931659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.931778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.931809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.931929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.931969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.932147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.932179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 00:27:45.451 [2024-11-19 11:38:58.932297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.932327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it. 
00:27:45.451 [2024-11-19 11:38:58.932510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.451 [2024-11-19 11:38:58.932541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.451 qpair failed and we were unable to recover it.
[... identical repeats elided: the same connect() failure (errno = 111) and nvme_tcp_qpair_connect_sock error for tqpair=0x7f5064000b90 (addr=10.0.0.2, port=4420) recur continuously from 11:38:58.932 through 11:38:58.951 ...]
00:27:45.454 [2024-11-19 11:38:58.951532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.454 [2024-11-19 11:38:58.951559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.454 qpair failed and we were unable to recover it. 00:27:45.454 [2024-11-19 11:38:58.951682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.454 [2024-11-19 11:38:58.951711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.454 qpair failed and we were unable to recover it. 00:27:45.454 [2024-11-19 11:38:58.951812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.454 [2024-11-19 11:38:58.951839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.454 qpair failed and we were unable to recover it. 00:27:45.454 [2024-11-19 11:38:58.952108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.454 [2024-11-19 11:38:58.952139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.454 qpair failed and we were unable to recover it. 00:27:45.454 [2024-11-19 11:38:58.952279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.454 [2024-11-19 11:38:58.952307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.454 qpair failed and we were unable to recover it. 
00:27:45.454 [2024-11-19 11:38:58.952403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.454 [2024-11-19 11:38:58.952431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.454 qpair failed and we were unable to recover it. 00:27:45.454 [2024-11-19 11:38:58.952537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.454 [2024-11-19 11:38:58.952564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.454 qpair failed and we were unable to recover it. 00:27:45.454 [2024-11-19 11:38:58.952672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.454 [2024-11-19 11:38:58.952700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.454 qpair failed and we were unable to recover it. 00:27:45.454 [2024-11-19 11:38:58.952819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.952847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.953040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.953070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 
00:27:45.455 [2024-11-19 11:38:58.953178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.953205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.953315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.953344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.953456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.953483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.953717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.953746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.953854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.953881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 
00:27:45.455 [2024-11-19 11:38:58.953993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.954023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.954199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.954230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.954345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.954377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.954478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.954504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.954630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.954657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 
00:27:45.455 [2024-11-19 11:38:58.954760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.954787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.954903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.954932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.955178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.955207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.955316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.955344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.955443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.955470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 
00:27:45.455 [2024-11-19 11:38:58.955570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.955600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.955703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.955730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.955895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.955924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.956031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.956059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.956227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.956255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 
00:27:45.455 [2024-11-19 11:38:58.956447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.956476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.956659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.956688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.956871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.956899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.957095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.957125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.957240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.957267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 
00:27:45.455 [2024-11-19 11:38:58.957366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.957395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.957505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.957533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.957646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.957675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.957769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.957797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.957900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.957929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 
00:27:45.455 [2024-11-19 11:38:58.958214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.958243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.958350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.958379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.958494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.958522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.958645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.958674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.958848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.958875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 
00:27:45.455 [2024-11-19 11:38:58.958974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.455 [2024-11-19 11:38:58.959004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.455 qpair failed and we were unable to recover it. 00:27:45.455 [2024-11-19 11:38:58.959101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.959128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.959250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.959278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.959467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.959496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.959599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.959627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 
00:27:45.456 [2024-11-19 11:38:58.959726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.959754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.959868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.959896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.960003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.960031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.960197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.960226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.960401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.960428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 
00:27:45.456 [2024-11-19 11:38:58.960540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.960568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.960687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.960714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.960959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.960996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.961093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.961121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.961286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.961315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 
00:27:45.456 [2024-11-19 11:38:58.961479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.961508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.961672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.961700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.961797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.961826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.961929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.961965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.962076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.962104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 
00:27:45.456 [2024-11-19 11:38:58.962269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.962298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.962475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.962502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.962758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.962786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.962884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.962911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.963039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.963069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 
00:27:45.456 [2024-11-19 11:38:58.963303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.963331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.963449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.963476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.963751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.963779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.963908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.963936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.964108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.964137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 
00:27:45.456 [2024-11-19 11:38:58.964237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.964264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.964362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.964391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.964562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.964591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.964766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.964794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.964970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.965000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 
00:27:45.456 [2024-11-19 11:38:58.965281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.965308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.965415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.965443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.965611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.965638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.456 [2024-11-19 11:38:58.965805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.456 [2024-11-19 11:38:58.965833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.456 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.966084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.966155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 
00:27:45.457 [2024-11-19 11:38:58.966367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.966405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.966521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.966554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.966749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.966781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.966966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.967000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.967185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.967219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 
00:27:45.457 [2024-11-19 11:38:58.967389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.967420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.967589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.967624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.967728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.967760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.967958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.967992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.968229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.968261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 
00:27:45.457 [2024-11-19 11:38:58.968369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.968402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.968639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.968670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.968853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.968907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.969099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.969132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.969259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.969290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 
00:27:45.457 [2024-11-19 11:38:58.969491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.969522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.969703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.969735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.969915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.969965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.970148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.970181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.970308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.970341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 
00:27:45.457 [2024-11-19 11:38:58.970454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.970485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.970733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.970764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.970881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.970914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.971045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.971079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.971191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.971224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 
00:27:45.457 [2024-11-19 11:38:58.971327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.971358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.971493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.971525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.971722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.971755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.971867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.971899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.972039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.972073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 
00:27:45.457 [2024-11-19 11:38:58.972330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.972362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.457 [2024-11-19 11:38:58.972477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.457 [2024-11-19 11:38:58.972509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.457 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.972626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.972658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.972776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.972808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.973024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.973057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 
00:27:45.458 [2024-11-19 11:38:58.973233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.973265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.973382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.973415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.973583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.973614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.973799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.973831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.974011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.974045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 
00:27:45.458 [2024-11-19 11:38:58.974165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.974197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.974386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.974418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.974546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.974578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.974752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.974784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.974984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.975017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 
00:27:45.458 [2024-11-19 11:38:58.975131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.975164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.975290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.975321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.975440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.975473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.975673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.975705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.975815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.975846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 
00:27:45.458 [2024-11-19 11:38:58.975967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.976000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.976185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.976216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.976404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.976442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.976544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.976575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.976748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.976780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 
00:27:45.458 [2024-11-19 11:38:58.976915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.976958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.977072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.977105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.977297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.977328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.977438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.977470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.977581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.977614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 
00:27:45.458 [2024-11-19 11:38:58.977732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.977765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.977883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.977914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.978099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.978133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.978304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.978336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.978532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.978565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 
00:27:45.458 [2024-11-19 11:38:58.978748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.978780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.978898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.978930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.979048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.979079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.979263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.979295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.458 qpair failed and we were unable to recover it. 00:27:45.458 [2024-11-19 11:38:58.979414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.458 [2024-11-19 11:38:58.979447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 
00:27:45.459 [2024-11-19 11:38:58.979619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.979650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.979819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.979852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.980092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.980126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.980310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.980343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.980470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.980502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 
00:27:45.459 [2024-11-19 11:38:58.980718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.980750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.980941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.980984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.981108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.981140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.981245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.981276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.981445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.981516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 
00:27:45.459 [2024-11-19 11:38:58.981673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.981709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.981816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.981850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.981982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.982015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.982121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.982153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.982329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.982360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 
00:27:45.459 [2024-11-19 11:38:58.982477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.982509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.982629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.982660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.982834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.982865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.983004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.983037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 00:27:45.459 [2024-11-19 11:38:58.983210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.459 [2024-11-19 11:38:58.983243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.459 qpair failed and we were unable to recover it. 
00:27:45.460 [2024-11-19 11:38:58.989191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.460 [2024-11-19 11:38:58.989261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.460 qpair failed and we were unable to recover it.
00:27:45.461 [2024-11-19 11:38:58.997191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.461 [2024-11-19 11:38:58.997263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.461 qpair failed and we were unable to recover it.
00:27:45.462 [2024-11-19 11:38:59.005263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.462 [2024-11-19 11:38:59.005296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.462 qpair failed and we were unable to recover it. 00:27:45.462 [2024-11-19 11:38:59.005399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.462 [2024-11-19 11:38:59.005431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.462 qpair failed and we were unable to recover it. 00:27:45.462 [2024-11-19 11:38:59.005554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.462 [2024-11-19 11:38:59.005586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.462 qpair failed and we were unable to recover it. 00:27:45.462 [2024-11-19 11:38:59.005706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.462 [2024-11-19 11:38:59.005738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.462 qpair failed and we were unable to recover it. 00:27:45.462 [2024-11-19 11:38:59.005915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.462 [2024-11-19 11:38:59.005971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.462 qpair failed and we were unable to recover it. 
00:27:45.462 [2024-11-19 11:38:59.006100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.462 [2024-11-19 11:38:59.006132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.462 qpair failed and we were unable to recover it. 00:27:45.462 [2024-11-19 11:38:59.006246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.462 [2024-11-19 11:38:59.006277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.462 qpair failed and we were unable to recover it. 00:27:45.462 [2024-11-19 11:38:59.006482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.462 [2024-11-19 11:38:59.006515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.462 qpair failed and we were unable to recover it. 00:27:45.462 [2024-11-19 11:38:59.006631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.462 [2024-11-19 11:38:59.006662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.462 qpair failed and we were unable to recover it. 00:27:45.462 [2024-11-19 11:38:59.006780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.462 [2024-11-19 11:38:59.006813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.462 qpair failed and we were unable to recover it. 
00:27:45.463 [2024-11-19 11:38:59.006987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.007021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.007148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.007181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.007302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.007333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.007500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.007533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.007751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.007783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 
00:27:45.463 [2024-11-19 11:38:59.007909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.007941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.008078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.008112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.008288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.008320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.011090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.011128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.011244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.011274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 
00:27:45.463 [2024-11-19 11:38:59.011463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.011502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.011745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.011777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.011919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.011959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.012075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.012108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.012308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.012339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 
00:27:45.463 [2024-11-19 11:38:59.012451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.012483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.012670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.012701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.012829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.012866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.012984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.013018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.013147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.013179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 
00:27:45.463 [2024-11-19 11:38:59.013285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.013316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.013429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.013462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.013595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.013627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.013745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.013777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.013883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.013914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 
00:27:45.463 [2024-11-19 11:38:59.014192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.014226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.014338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.014369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.014494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.014527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.014698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.014730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.014909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.014941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 
00:27:45.463 [2024-11-19 11:38:59.015071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.015103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.015239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.015272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.015395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.015427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.015555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.015587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.015769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.015801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 
00:27:45.463 [2024-11-19 11:38:59.015906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.015938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.016138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.463 [2024-11-19 11:38:59.016170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.463 qpair failed and we were unable to recover it. 00:27:45.463 [2024-11-19 11:38:59.016287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.016319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.016494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.016525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.016703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.016734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 
00:27:45.464 [2024-11-19 11:38:59.016870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.016902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.017035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.017068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.017172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.017204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.017456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.017487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.017603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.017641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 
00:27:45.464 [2024-11-19 11:38:59.017744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.017775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.017881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.017913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.018110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.018144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.018333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.018366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.018489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.018520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 
00:27:45.464 [2024-11-19 11:38:59.018641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.018674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.018791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.018823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.019068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.019102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.019279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.019311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.019426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.019458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 
00:27:45.464 [2024-11-19 11:38:59.019663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.019695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.019832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.019865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.019989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.020022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.020155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.020189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.020299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.020330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 
00:27:45.464 [2024-11-19 11:38:59.020465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.020498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.020602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.020634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.020806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.020839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.020945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.020985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.021230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.021262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 
00:27:45.464 [2024-11-19 11:38:59.021369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.021400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.021518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.021549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.021787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.021819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.021994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.022026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.022266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.022298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 
00:27:45.464 [2024-11-19 11:38:59.022421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.022452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.022577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.022610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.022794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.022826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.022942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.022987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 00:27:45.464 [2024-11-19 11:38:59.023097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.023129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.464 qpair failed and we were unable to recover it. 
00:27:45.464 [2024-11-19 11:38:59.023300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.464 [2024-11-19 11:38:59.023332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.023501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.023533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.023648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.023681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.023858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.023888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.024015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.024049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 
00:27:45.465 [2024-11-19 11:38:59.024236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.024268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.024396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.024427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.024539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.024571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.024695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.024726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.024900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.024932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 
00:27:45.465 [2024-11-19 11:38:59.025064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.025102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.025232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.025264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.025382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.025414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.025542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.025573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.025683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.025715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 
00:27:45.465 [2024-11-19 11:38:59.025818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.025851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.025983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.026016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.026122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.026153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.026327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.026359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.026465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.026496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 
00:27:45.465 [2024-11-19 11:38:59.026626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.026658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.026763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.026794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.027015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.027048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.027154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.027186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.027321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.027353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 
00:27:45.465 [2024-11-19 11:38:59.027524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.027555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.027658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.027689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.027800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.027832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.027940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.027983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.028107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.028138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 
00:27:45.465 [2024-11-19 11:38:59.028311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.028343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.028455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.028488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.028593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.028624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.028802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.028834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.029025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.029058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 
00:27:45.465 [2024-11-19 11:38:59.029233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.029264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.029382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.029414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.029604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.029642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.465 [2024-11-19 11:38:59.029758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.465 [2024-11-19 11:38:59.029789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.465 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.029909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.029940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 
00:27:45.466 [2024-11-19 11:38:59.030068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.030101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.030209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.030240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.030346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.030377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.030504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.030535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.030650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.030683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 
00:27:45.466 [2024-11-19 11:38:59.030799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.030830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.030965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.031000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.031118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.031149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.031263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.031295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.031508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.031539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 
00:27:45.466 [2024-11-19 11:38:59.031722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.031754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.031980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.032052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.032256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.032293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.032409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.032441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.032626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.032657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 
00:27:45.466 [2024-11-19 11:38:59.032844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.032874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.032977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.033010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.033118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.033149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.033262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.033292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.033407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.033438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 
00:27:45.466 [2024-11-19 11:38:59.033627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.033657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.033831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.033862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.034039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.034071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.034246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.034279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.034453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.034493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 
00:27:45.466 [2024-11-19 11:38:59.034605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.034636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.034834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.034865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.035054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.035086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.035196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.035227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.035361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.035391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 
00:27:45.466 [2024-11-19 11:38:59.035502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.035534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.035715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.466 [2024-11-19 11:38:59.035746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.466 qpair failed and we were unable to recover it. 00:27:45.466 [2024-11-19 11:38:59.035917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.035960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 00:27:45.467 [2024-11-19 11:38:59.036165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.036197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 00:27:45.467 [2024-11-19 11:38:59.036300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.036332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 
00:27:45.467 [2024-11-19 11:38:59.036441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.036472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 00:27:45.467 [2024-11-19 11:38:59.036605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.036637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 00:27:45.467 [2024-11-19 11:38:59.036753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.036784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 00:27:45.467 [2024-11-19 11:38:59.036896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.036926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 00:27:45.467 [2024-11-19 11:38:59.037127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.037160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 
00:27:45.467 [2024-11-19 11:38:59.037342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.037373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 00:27:45.467 [2024-11-19 11:38:59.037555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.037586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 00:27:45.467 [2024-11-19 11:38:59.037701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.037732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 00:27:45.467 [2024-11-19 11:38:59.037913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.037945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 00:27:45.467 [2024-11-19 11:38:59.038084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.038115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it. 
00:27:45.467 [2024-11-19 11:38:59.038243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.038275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it.
00:27:45.467 [identical connect() failed (errno = 111) / qpair recovery error triplet repeated for tqpair=0x7f5064000b90 through 11:38:59.038920]
00:27:45.467 [2024-11-19 11:38:59.039062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.467 [2024-11-19 11:38:59.039097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.467 qpair failed and we were unable to recover it.
00:27:45.469 [identical error triplet repeated for tqpair=0xadaba0 through 11:38:59.051532]
00:27:45.469 [2024-11-19 11:38:59.051694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.469 [2024-11-19 11:38:59.051764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.469 qpair failed and we were unable to recover it.
00:27:45.470 [identical error triplet repeated for tqpair=0x7f5064000b90 through 11:38:59.059364]
00:27:45.470 [2024-11-19 11:38:59.059598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.470 [2024-11-19 11:38:59.059668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.470 qpair failed and we were unable to recover it.
00:27:45.470 [2024-11-19 11:38:59.059882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.470 [2024-11-19 11:38:59.059919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.470 qpair failed and we were unable to recover it. 00:27:45.470 [2024-11-19 11:38:59.060072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.470 [2024-11-19 11:38:59.060108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.470 qpair failed and we were unable to recover it. 00:27:45.470 [2024-11-19 11:38:59.060226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.470 [2024-11-19 11:38:59.060259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.470 qpair failed and we were unable to recover it. 00:27:45.470 [2024-11-19 11:38:59.060401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.470 [2024-11-19 11:38:59.060433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.470 qpair failed and we were unable to recover it. 00:27:45.470 [2024-11-19 11:38:59.060548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.470 [2024-11-19 11:38:59.060580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.470 qpair failed and we were unable to recover it. 
00:27:45.470 [2024-11-19 11:38:59.060694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.470 [2024-11-19 11:38:59.060725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.470 qpair failed and we were unable to recover it. 00:27:45.470 [2024-11-19 11:38:59.060990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.470 [2024-11-19 11:38:59.061024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.470 qpair failed and we were unable to recover it. 00:27:45.470 [2024-11-19 11:38:59.061313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.061346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.061470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.061502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.061695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.061726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 
00:27:45.471 [2024-11-19 11:38:59.061865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.061897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.062102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.062135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.062279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.062322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.062536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.062567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.062679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.062711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 
00:27:45.471 [2024-11-19 11:38:59.062889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.062922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.063064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.063096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.063277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.063309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.063422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.063454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.063577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.063611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 
00:27:45.471 [2024-11-19 11:38:59.063734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.063767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.063877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.063908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.064020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.064053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.064170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.064201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.064313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.064344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 
00:27:45.471 [2024-11-19 11:38:59.064528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.064559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.064737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.064769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.065024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.065058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.065320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.065351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.065463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.065495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 
00:27:45.471 [2024-11-19 11:38:59.065613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.065644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.065759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.065790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.065906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.471 [2024-11-19 11:38:59.065938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.471 qpair failed and we were unable to recover it. 00:27:45.471 [2024-11-19 11:38:59.066070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.066102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.066284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.066315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 
00:27:45.472 [2024-11-19 11:38:59.066440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.066472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.066673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.066705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.066830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.066862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.067045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.067077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.067193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.067232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 
00:27:45.472 [2024-11-19 11:38:59.067337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.067368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.067554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.067586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.067792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.067824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.067936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.067979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.068169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.068201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 
00:27:45.472 [2024-11-19 11:38:59.068326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.068358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.068537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.068567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.068676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.068707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.068917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.068961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.069088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.069121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 
00:27:45.472 [2024-11-19 11:38:59.069238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.069270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.069378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.069409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.069676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.069709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.069845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.069877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.069997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.070030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 
00:27:45.472 [2024-11-19 11:38:59.070158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.070189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.070368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.070399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.070525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.070556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.070678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.070709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.070826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.070858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 
00:27:45.472 [2024-11-19 11:38:59.071037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.071070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.071311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.071343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.071610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.071642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.071765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.071798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 00:27:45.472 [2024-11-19 11:38:59.071984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.472 [2024-11-19 11:38:59.072018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.472 qpair failed and we were unable to recover it. 
00:27:45.473 [2024-11-19 11:38:59.072133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.473 [2024-11-19 11:38:59.072165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.473 qpair failed and we were unable to recover it. 00:27:45.473 [2024-11-19 11:38:59.072281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.473 [2024-11-19 11:38:59.072314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.473 qpair failed and we were unable to recover it. 00:27:45.473 [2024-11-19 11:38:59.072441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.473 [2024-11-19 11:38:59.072473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.473 qpair failed and we were unable to recover it. 00:27:45.473 [2024-11-19 11:38:59.072672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.473 [2024-11-19 11:38:59.072704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.473 qpair failed and we were unable to recover it. 00:27:45.473 [2024-11-19 11:38:59.072910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.473 [2024-11-19 11:38:59.072942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.473 qpair failed and we were unable to recover it. 
00:27:45.473 [2024-11-19 11:38:59.073217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.473 [2024-11-19 11:38:59.073249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.473 qpair failed and we were unable to recover it. 00:27:45.473 [2024-11-19 11:38:59.073366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.473 [2024-11-19 11:38:59.073397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.473 qpair failed and we were unable to recover it. 00:27:45.473 [2024-11-19 11:38:59.073522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.473 [2024-11-19 11:38:59.073555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.473 qpair failed and we were unable to recover it. 00:27:45.473 [2024-11-19 11:38:59.073676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.473 [2024-11-19 11:38:59.073708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.473 qpair failed and we were unable to recover it. 00:27:45.473 [2024-11-19 11:38:59.073890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.473 [2024-11-19 11:38:59.073922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.473 qpair failed and we were unable to recover it. 
00:27:45.473 [2024-11-19 11:38:59.074121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.473 [2024-11-19 11:38:59.074154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.473 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and qpair recovery error for tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 repeats continuously through timestamp 11:38:59.093717 ...]
00:27:45.477 [2024-11-19 11:38:59.093895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.093923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 00:27:45.477 [2024-11-19 11:38:59.094109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.094139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 00:27:45.477 [2024-11-19 11:38:59.094384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.094414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 00:27:45.477 [2024-11-19 11:38:59.094511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.094540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 00:27:45.477 [2024-11-19 11:38:59.094642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.094671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 
00:27:45.477 [2024-11-19 11:38:59.094856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.094884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 00:27:45.477 [2024-11-19 11:38:59.095004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.095036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 00:27:45.477 [2024-11-19 11:38:59.095203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.095232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 00:27:45.477 [2024-11-19 11:38:59.095359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.095388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 00:27:45.477 [2024-11-19 11:38:59.095501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.095535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 
00:27:45.477 [2024-11-19 11:38:59.095654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.095683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 00:27:45.477 [2024-11-19 11:38:59.095805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.095834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 00:27:45.477 [2024-11-19 11:38:59.095936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.095974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 00:27:45.477 [2024-11-19 11:38:59.096078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.096107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 00:27:45.477 [2024-11-19 11:38:59.096213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.477 [2024-11-19 11:38:59.096241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.477 qpair failed and we were unable to recover it. 
00:27:45.478 [2024-11-19 11:38:59.096339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.096367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.096495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.096524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.096695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.096724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.096924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.096966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.097146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.097175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 
00:27:45.478 [2024-11-19 11:38:59.097350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.097379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.097558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.097587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.097701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.097730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.097872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.097901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.098089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.098118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 
00:27:45.478 [2024-11-19 11:38:59.098239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.098268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.098382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.098412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.098648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.098676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.098839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.098868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.099037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.099069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 
00:27:45.478 [2024-11-19 11:38:59.099317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.099346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.099463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.099493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.099683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.099712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.099826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.099855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.099971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.100001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 
00:27:45.478 [2024-11-19 11:38:59.100112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.100142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.100258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.100287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.100462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.100491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.478 qpair failed and we were unable to recover it. 00:27:45.478 [2024-11-19 11:38:59.100611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.478 [2024-11-19 11:38:59.100640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.100753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.100783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 
00:27:45.479 [2024-11-19 11:38:59.100885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.100914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.101108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.101138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.101374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.101404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.101514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.101542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.101738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.101766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 
00:27:45.479 [2024-11-19 11:38:59.101936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.101972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.102141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.102169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.102276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.102305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.102422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.102452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.102646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.102680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 
00:27:45.479 [2024-11-19 11:38:59.102847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.102875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.103109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.103141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.103308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.103337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.103449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.103479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.103640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.103669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 
00:27:45.479 [2024-11-19 11:38:59.103874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.103902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.104010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.104039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.104149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.104178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.104294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.104322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.104570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.104599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 
00:27:45.479 [2024-11-19 11:38:59.104729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.104758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.104934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.104971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.105204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.105233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.105477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.105508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.105714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.105743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 
00:27:45.479 [2024-11-19 11:38:59.105920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.105966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.106145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.106174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.479 [2024-11-19 11:38:59.106282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.479 [2024-11-19 11:38:59.106313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.479 qpair failed and we were unable to recover it. 00:27:45.480 [2024-11-19 11:38:59.106488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.480 [2024-11-19 11:38:59.106517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.480 qpair failed and we were unable to recover it. 00:27:45.480 [2024-11-19 11:38:59.106684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.480 [2024-11-19 11:38:59.106713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.480 qpair failed and we were unable to recover it. 
00:27:45.480 [2024-11-19 11:38:59.106896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.480 [2024-11-19 11:38:59.106928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.480 qpair failed and we were unable to recover it. 00:27:45.480 [2024-11-19 11:38:59.107054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.480 [2024-11-19 11:38:59.107087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.480 qpair failed and we were unable to recover it. 00:27:45.480 [2024-11-19 11:38:59.107192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.480 [2024-11-19 11:38:59.107224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.480 qpair failed and we were unable to recover it. 00:27:45.480 [2024-11-19 11:38:59.107349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.480 [2024-11-19 11:38:59.107381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.480 qpair failed and we were unable to recover it. 00:27:45.480 [2024-11-19 11:38:59.107483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.480 [2024-11-19 11:38:59.107515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.480 qpair failed and we were unable to recover it. 
00:27:45.480 [2024-11-19 11:38:59.107619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.480 [2024-11-19 11:38:59.107652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.480 qpair failed and we were unable to recover it. 00:27:45.480 [2024-11-19 11:38:59.107826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.480 [2024-11-19 11:38:59.107859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.480 qpair failed and we were unable to recover it. 00:27:45.480 [2024-11-19 11:38:59.108000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.480 [2024-11-19 11:38:59.108033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.480 qpair failed and we were unable to recover it. 00:27:45.480 [2024-11-19 11:38:59.108146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.480 [2024-11-19 11:38:59.108177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.480 qpair failed and we were unable to recover it. 00:27:45.480 [2024-11-19 11:38:59.108354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.480 [2024-11-19 11:38:59.108386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.480 qpair failed and we were unable to recover it. 
00:27:45.483 [2024-11-19 11:38:59.122353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.483 [2024-11-19 11:38:59.122386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-19 11:38:59.122622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.483 [2024-11-19 11:38:59.122694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-19 11:38:59.122874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.483 [2024-11-19 11:38:59.122911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-19 11:38:59.123043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.483 [2024-11-19 11:38:59.123079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-19 11:38:59.123272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.483 [2024-11-19 11:38:59.123304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.484 [2024-11-19 11:38:59.129139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-11-19 11:38:59.129170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-11-19 11:38:59.129363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-11-19 11:38:59.129394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-11-19 11:38:59.129508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-11-19 11:38:59.129540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-11-19 11:38:59.129712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-11-19 11:38:59.129744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-11-19 11:38:59.130008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-11-19 11:38:59.130041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-19 11:38:59.130170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-11-19 11:38:59.130202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-11-19 11:38:59.130376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-11-19 11:38:59.130408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-11-19 11:38:59.130522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-11-19 11:38:59.130554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-11-19 11:38:59.130701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-11-19 11:38:59.130733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.484 [2024-11-19 11:38:59.130915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-11-19 11:38:59.130956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-19 11:38:59.131199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.484 [2024-11-19 11:38:59.131230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.484 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.131410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.131442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.131565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.131597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.131726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.131758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.131938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.131980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-11-19 11:38:59.132175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.132206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.132321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.132352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.132472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.132503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.132644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.132676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.132784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.132815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-11-19 11:38:59.132991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.133030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.133216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.133250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.133350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.133381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.133506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.133537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.133652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.133684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-11-19 11:38:59.133810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.133841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.134024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.134058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.134177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.134209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.134316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.134347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.134586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.134617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-11-19 11:38:59.134859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.134891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.135020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.135052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.135235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.135267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.135460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.135492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.135717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.135792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-11-19 11:38:59.136021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.136060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.136189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.136224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.136335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.136367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.485 qpair failed and we were unable to recover it. 00:27:45.485 [2024-11-19 11:38:59.136483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.485 [2024-11-19 11:38:59.136516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.136706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.136737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 
00:27:45.486 [2024-11-19 11:38:59.136866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.136898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.137055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.137088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.137269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.137301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.137424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.137456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.137624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.137656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 
00:27:45.486 [2024-11-19 11:38:59.137830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.137861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.137997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.138032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.138145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.138186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.138360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.138391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.138501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.138533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 
00:27:45.486 [2024-11-19 11:38:59.138708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.138741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.138851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.138883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.139005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.139039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.139305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.139337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.139488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.139521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 
00:27:45.486 [2024-11-19 11:38:59.139780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.139812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.139961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.139994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.140105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.140137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.140244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.140277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.140393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.140426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 
00:27:45.486 [2024-11-19 11:38:59.140530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.140562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.140684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.140717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.140927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.140971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.141086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.141119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 00:27:45.486 [2024-11-19 11:38:59.141347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.486 [2024-11-19 11:38:59.141380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.486 qpair failed and we were unable to recover it. 
00:27:45.486 [2024-11-19 11:38:59.141560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.487 [2024-11-19 11:38:59.141593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.487 qpair failed and we were unable to recover it. 00:27:45.487 [2024-11-19 11:38:59.141720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.487 [2024-11-19 11:38:59.141753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.487 qpair failed and we were unable to recover it. 00:27:45.487 [2024-11-19 11:38:59.141858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.487 [2024-11-19 11:38:59.141890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.487 qpair failed and we were unable to recover it. 00:27:45.487 [2024-11-19 11:38:59.142079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.487 [2024-11-19 11:38:59.142113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.487 qpair failed and we were unable to recover it. 00:27:45.487 [2024-11-19 11:38:59.142291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.487 [2024-11-19 11:38:59.142322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.487 qpair failed and we were unable to recover it. 
00:27:45.487 [2024-11-19 11:38:59.142533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.487 [2024-11-19 11:38:59.142567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.487 qpair failed and we were unable to recover it. 00:27:45.487 [2024-11-19 11:38:59.142688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.487 [2024-11-19 11:38:59.142720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.487 qpair failed and we were unable to recover it. 00:27:45.487 [2024-11-19 11:38:59.142835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.487 [2024-11-19 11:38:59.142868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.487 qpair failed and we were unable to recover it. 00:27:45.487 [2024-11-19 11:38:59.142999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.487 [2024-11-19 11:38:59.143032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.487 qpair failed and we were unable to recover it. 00:27:45.487 [2024-11-19 11:38:59.143195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.487 [2024-11-19 11:38:59.143266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.487 qpair failed and we were unable to recover it. 
00:27:45.487 [2024-11-19 11:38:59.143530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.487 [2024-11-19 11:38:59.143567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:45.487 qpair failed and we were unable to recover it.
[identical connect()/qpair failure sequence for tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 repeated continuously through 2024-11-19 11:38:59.165665; duplicate entries omitted]
00:27:45.491 [2024-11-19 11:38:59.165945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.166031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.166172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.166208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.166388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.166420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.166542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.166576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.166829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.166861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 
00:27:45.491 [2024-11-19 11:38:59.167126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.167160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.167286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.167319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.167425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.167456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.167646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.167678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.167814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.167855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 
00:27:45.491 [2024-11-19 11:38:59.168033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.168065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.168181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.168213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.168481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.168514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.168692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.168723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.168992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.169026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 
00:27:45.491 [2024-11-19 11:38:59.169264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.169296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.169471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.169503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.169719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.169751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.170015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.170049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.170232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.170263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 
00:27:45.491 [2024-11-19 11:38:59.170455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.170487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.170659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.170690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.170882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.170913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.171079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.171112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.171218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.171250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 
00:27:45.491 [2024-11-19 11:38:59.171428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.171460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.171639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.171669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.171841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.171873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.171984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.172017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 00:27:45.491 [2024-11-19 11:38:59.172190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.491 [2024-11-19 11:38:59.172222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.491 qpair failed and we were unable to recover it. 
00:27:45.492 [2024-11-19 11:38:59.172459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.172491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.172659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.172691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.172794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.172825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.173007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.173039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.173162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.173193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 
00:27:45.492 [2024-11-19 11:38:59.173380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.173412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.173595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.173626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.173892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.173923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.174066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.174100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.174278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.174310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 
00:27:45.492 [2024-11-19 11:38:59.174480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.174511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.174635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.174667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.174853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.174884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.175130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.175163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.175289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.175321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 
00:27:45.492 [2024-11-19 11:38:59.175511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.175543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.175800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.175832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.176087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.176120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.176298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.176329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.176539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.176572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 
00:27:45.492 [2024-11-19 11:38:59.176758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.176797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.177007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.177039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.177173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.177205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.177400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.177432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.177626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.177658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 
00:27:45.492 [2024-11-19 11:38:59.177785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.177816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.178008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.178041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.178157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.178189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.178334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.178366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 00:27:45.492 [2024-11-19 11:38:59.178580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.492 [2024-11-19 11:38:59.178612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.492 qpair failed and we were unable to recover it. 
00:27:45.492 [2024-11-19 11:38:59.178797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.178829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.179017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.179049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.179190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.179222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.179423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.179455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.179637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.179670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 
00:27:45.493 [2024-11-19 11:38:59.179915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.179959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.180090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.180122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.180252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.180284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.180468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.180501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.180606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.180637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 
00:27:45.493 [2024-11-19 11:38:59.180826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.180858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.180981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.181013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.181188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.181219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.181322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.181354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.181554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.181586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 
00:27:45.493 [2024-11-19 11:38:59.181762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.181794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.182036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.182070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.182188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.182225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.182329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.182362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.182492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.182524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 
00:27:45.493 [2024-11-19 11:38:59.182725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.182757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.182993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.183026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.183199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.183229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.183344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.183375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.183492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.183523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 
00:27:45.493 [2024-11-19 11:38:59.183640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.183671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.183782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.183813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.183928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.183966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.184098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.184129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 00:27:45.493 [2024-11-19 11:38:59.184253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.493 [2024-11-19 11:38:59.184283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.493 qpair failed and we were unable to recover it. 
00:27:45.493 [2024-11-19 11:38:59.184516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.493 [2024-11-19 11:38:59.184548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.493 qpair failed and we were unable to recover it.
00:27:45.493 [2024-11-19 11:38:59.184659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.493 [2024-11-19 11:38:59.184690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.493 qpair failed and we were unable to recover it.
00:27:45.493 [2024-11-19 11:38:59.184812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.493 [2024-11-19 11:38:59.184844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.493 qpair failed and we were unable to recover it.
00:27:45.493 [2024-11-19 11:38:59.185021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.493 [2024-11-19 11:38:59.185054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.493 qpair failed and we were unable to recover it.
00:27:45.493 [2024-11-19 11:38:59.185174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.493 [2024-11-19 11:38:59.185204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.493 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.185326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.185357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.185535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.185566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.185666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.185697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.185824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.185855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.185993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.186027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.186264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.186295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.186406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.186437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.186619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.186651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.186754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.186786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.186918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.186956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.187086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.187118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.187242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.187272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.187445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.187477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.187669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.187701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.187866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.187897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.188081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.188113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.188298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.188330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.188448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.188486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.188591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.188623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.494 [2024-11-19 11:38:59.188802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.494 [2024-11-19 11:38:59.188833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.494 qpair failed and we were unable to recover it.
00:27:45.783 [2024-11-19 11:38:59.190275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.783 [2024-11-19 11:38:59.190328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.783 qpair failed and we were unable to recover it.
00:27:45.783 [2024-11-19 11:38:59.190558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.783 [2024-11-19 11:38:59.190592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.783 qpair failed and we were unable to recover it.
00:27:45.783 [2024-11-19 11:38:59.190701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.783 [2024-11-19 11:38:59.190732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.783 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.190921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.190973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.191101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.191133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.191328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.191359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.191597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.191629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.191751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.191782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.191904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.191935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.192052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.192084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.192275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.192307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.192443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.192474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.192714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.192745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.192874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.192905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.193035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.193067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.193190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.193221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.193396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.193427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.193548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.193580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.193757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.193788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.193907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.193938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.194088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.194120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.194416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.194448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.194620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.194651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.194836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.194867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.195055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.195088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.195201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.195233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.195429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.195461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.195631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.195663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.195779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.195810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.195996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.196028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.196227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.196266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.196440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.196471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.196604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.196637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.196765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.196796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.196902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.196934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.197112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.197144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.197334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.197367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.197484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.197516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.197686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.197719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.197836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.197867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.197987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.198020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.198193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.198225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.784 qpair failed and we were unable to recover it.
00:27:45.784 [2024-11-19 11:38:59.198395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.784 [2024-11-19 11:38:59.198426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.198549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.198581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.198702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.198734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.198911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.198944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.199161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.199193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.199339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.199371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.199473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.199504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.199703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.199735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.199859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.199891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.200022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.200055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.200185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.200217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.200325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.200358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.200546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.200578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.200681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.200712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.200814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.200848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.201018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.201051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.201177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.201208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.201377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.201411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.201525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.201557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.201673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.201704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.201819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.201851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.201980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.202012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.202185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.202217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.202320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.202352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.202451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.202482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.202588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.202618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.202746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.202777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.202910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.202940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.203054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.203086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.203256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.203293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.203395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.203426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.203623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.203654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.203788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.203819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.204012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.204044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.204218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.204250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.204356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.204388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.204503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.204536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.204656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.204687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.204802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.204834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.204969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.205003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.205103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.205135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.785 [2024-11-19 11:38:59.205324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.785 [2024-11-19 11:38:59.205356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.785 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.205529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.205561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.205742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.205774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.205907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.205939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.206087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.206120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.206223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.206255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.206431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.206461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.206645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.206677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.206807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.206837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.206939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.206982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.207111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.207142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.207332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.207365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.207469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.207500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.207608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.207640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.207823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.207855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.207968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.208007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.208115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.208147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.208328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.208358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.208483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.208512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.208619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.208649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.208863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.208892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.209007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.209049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.209145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.209173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.209296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.209325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.209424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.209452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.209554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.209584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.209693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.209721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.209828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.209857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.209977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.210007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.210168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.210239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.210462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.210530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.210731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.210767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.210884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.210916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.211117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.211149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.211255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.211291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.211394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.211426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.211540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.211572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.211675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.211706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.211809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.211840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.212033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.212066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.212236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.212268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.212373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.786 [2024-11-19 11:38:59.212403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.786 qpair failed and we were unable to recover it.
00:27:45.786 [2024-11-19 11:38:59.212511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.212550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.212671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.212701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.212806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.212838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.212989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.213021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.213151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.213181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.213288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.213319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.213430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.213462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.213573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.213604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.213710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.213741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.213854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.213885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.214004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.214036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.214142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.214173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.214293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.214324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.214446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.214476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.214602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.214635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.214817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.214848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.215017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.215049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.215183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.215214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.215338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.215371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.215471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.215501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.215624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.215656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.215900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.215936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.216048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.216076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.216183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.216213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.216311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.216339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.216466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.216496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.216611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.216640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.216781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.216823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.216964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.216999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.217112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.217147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.217274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.217307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.217410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.217442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.217560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.217592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.217708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.217740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.217911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.217942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.218147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.218179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.218306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.218338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.218529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.218561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.218663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.218694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.218809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.787 [2024-11-19 11:38:59.218842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.787 qpair failed and we were unable to recover it.
00:27:45.787 [2024-11-19 11:38:59.218967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.787 [2024-11-19 11:38:59.219009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.787 qpair failed and we were unable to recover it. 00:27:45.787 [2024-11-19 11:38:59.219182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.219214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.219396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.219428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.219612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.219644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.219821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.219853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 
00:27:45.788 [2024-11-19 11:38:59.219969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.220003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.220125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.220157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.220270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.220302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.220430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.220463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.220570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.220602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 
00:27:45.788 [2024-11-19 11:38:59.220743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.220776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.220888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.220920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.221077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.221148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.221377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.221414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.221553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.221587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 
00:27:45.788 [2024-11-19 11:38:59.221712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.221744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.221883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.221915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.222067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.222101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.222279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.222311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.222488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.222519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 
00:27:45.788 [2024-11-19 11:38:59.222632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.222665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.222774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.222805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.222976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.223010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.223135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.223167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.223362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.223394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 
00:27:45.788 [2024-11-19 11:38:59.223511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.223543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.223661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.223691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.223814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.223850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.223963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.224000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.224173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.224204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 
00:27:45.788 [2024-11-19 11:38:59.224378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.224409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.224512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.224545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.224665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.224697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.224816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.224846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 00:27:45.788 [2024-11-19 11:38:59.225030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.788 [2024-11-19 11:38:59.225063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.788 qpair failed and we were unable to recover it. 
00:27:45.788 [2024-11-19 11:38:59.225239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.225272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.225446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.225477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.225670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.225703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.225802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.225833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.225940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.225997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 
00:27:45.789 [2024-11-19 11:38:59.226105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.226137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.226252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.226284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.226463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.226494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.226616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.226648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.226774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.226805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 
00:27:45.789 [2024-11-19 11:38:59.226995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.227030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.227132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.227167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.227286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.227316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.227498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.227530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.227787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.227819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 
00:27:45.789 [2024-11-19 11:38:59.228014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.228046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.228148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.228179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.228381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.228412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.228599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.228631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.228815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.228852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 
00:27:45.789 [2024-11-19 11:38:59.228981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.229014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.229197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.229229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.229413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.229444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.229621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.229654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.229897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.229928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 
00:27:45.789 [2024-11-19 11:38:59.230126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.230160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.230329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.230361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.230496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.230528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.230700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.230732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.230901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.230933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 
00:27:45.789 [2024-11-19 11:38:59.231076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.231108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.231212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.231243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.231433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.231464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.231659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.231691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.231944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.231987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 
00:27:45.789 [2024-11-19 11:38:59.232223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.232254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.232380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.232411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.232544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.232574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.232684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.232715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.232888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.232920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 
00:27:45.789 [2024-11-19 11:38:59.233058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.233096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.789 qpair failed and we were unable to recover it. 00:27:45.789 [2024-11-19 11:38:59.233200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.789 [2024-11-19 11:38:59.233233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.233339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.233370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.233550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.233582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.233758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.233790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 
00:27:45.790 [2024-11-19 11:38:59.233966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.234000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.234184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.234221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.234397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.234429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.234561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.234592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.234766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.234798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 
00:27:45.790 [2024-11-19 11:38:59.234998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.235033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.235218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.235250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.235446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.235478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.235659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.235691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.235801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.235832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 
00:27:45.790 [2024-11-19 11:38:59.236006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.236039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.236249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.236281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.236472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.236504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.236698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.236729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.236838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.236872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 
00:27:45.790 [2024-11-19 11:38:59.236998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.237031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.237202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.237234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.237421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.237453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.237555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.237587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.237719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.237750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 
00:27:45.790 [2024-11-19 11:38:59.237933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.237977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.238178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.238210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.238331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.238363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.238482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.238513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.238700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.238731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 
00:27:45.790 [2024-11-19 11:38:59.238906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.238938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.239123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.239155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.239328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.239360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.239542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.239581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.239833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.239905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 
00:27:45.790 [2024-11-19 11:38:59.240052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.240090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.240276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.240308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.240570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.240602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.240720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.240752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.240963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.240997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 
00:27:45.790 [2024-11-19 11:38:59.241108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.241140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.241325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.241355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.790 qpair failed and we were unable to recover it. 00:27:45.790 [2024-11-19 11:38:59.241551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.790 [2024-11-19 11:38:59.241584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.241791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.241822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.241929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.241973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 
00:27:45.791 [2024-11-19 11:38:59.242107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.242140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.242331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.242372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.242638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.242670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.242787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.242820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.242939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.242982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 
00:27:45.791 [2024-11-19 11:38:59.243191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.243223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.243331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.243362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.243477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.243510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.243633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.243665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.243789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.243820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 
00:27:45.791 [2024-11-19 11:38:59.244105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.244139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.244346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.244377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.244514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.244546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.244656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.244689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.244798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.244830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 
00:27:45.791 [2024-11-19 11:38:59.245033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.245066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.245262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.245294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.245530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.245562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.245752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.245783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.246038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.246071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 
00:27:45.791 [2024-11-19 11:38:59.246252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.246286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.246484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.246515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.246759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.246790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.246984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.247017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.247224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.247256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 
00:27:45.791 [2024-11-19 11:38:59.247457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.247489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.247628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.247661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.247854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.247886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.248120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.248192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.248395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.248430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 
00:27:45.791 [2024-11-19 11:38:59.248618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.248651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.248780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.248812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.249016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.249048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.249170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.249202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.249397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.249429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 
00:27:45.791 [2024-11-19 11:38:59.249567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.249599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.249810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.249842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.249973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.250007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.791 qpair failed and we were unable to recover it. 00:27:45.791 [2024-11-19 11:38:59.250189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.791 [2024-11-19 11:38:59.250220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.250425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.250457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 
00:27:45.792 [2024-11-19 11:38:59.250660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.250692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.250826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.250857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.251071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.251104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.251342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.251375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.251550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.251582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 
00:27:45.792 [2024-11-19 11:38:59.251705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.251737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.251931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.251974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.252105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.252138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.252336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.252368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.252542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.252574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 
00:27:45.792 [2024-11-19 11:38:59.252810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.252842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.252967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.252999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.253201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.253233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.253399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.253431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.253673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.253705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 
00:27:45.792 [2024-11-19 11:38:59.253912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.253960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.254137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.254170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.254282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.254313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.254496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.254528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.254723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.254755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 
00:27:45.792 [2024-11-19 11:38:59.254899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.254932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.255186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.255219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.255392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.255424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.255543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.255574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 00:27:45.792 [2024-11-19 11:38:59.255746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.792 [2024-11-19 11:38:59.255779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.792 qpair failed and we were unable to recover it. 
00:27:45.795 [2024-11-19 11:38:59.279875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.279908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.280160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.280193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.280322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.280355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.280554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.280585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.280774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.280806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 
00:27:45.795 [2024-11-19 11:38:59.281059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.281092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.281353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.281386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.281627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.281658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.281834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.281866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.281991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.282025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 
00:27:45.795 [2024-11-19 11:38:59.282169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.282202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.282379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.282410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.282695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.282728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.282918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.282969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.283152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.283185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 
00:27:45.795 [2024-11-19 11:38:59.283295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.283327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.283499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.283536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.283708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.283741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.283941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.283986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.284209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.284241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 
00:27:45.795 [2024-11-19 11:38:59.284415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.284446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.284658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.284691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.284856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.284888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.285101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.285136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 00:27:45.795 [2024-11-19 11:38:59.285363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.795 [2024-11-19 11:38:59.285395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.795 qpair failed and we were unable to recover it. 
00:27:45.796 [2024-11-19 11:38:59.285511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.285544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.285783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.285814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.285962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.285995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.286191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.286222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.286361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.286394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 
00:27:45.796 [2024-11-19 11:38:59.286578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.286610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.286792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.286824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.286996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.287030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.287324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.287356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.287544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.287576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 
00:27:45.796 [2024-11-19 11:38:59.287842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.287875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.288064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.288096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.288299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.288331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.288575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.288607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.288864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.288895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 
00:27:45.796 [2024-11-19 11:38:59.289119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.289153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.289322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.289354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.289614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.289646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.289830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.289860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.290183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.290216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 
00:27:45.796 [2024-11-19 11:38:59.290421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.290453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.290629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.290662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.290940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.291004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.291188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.291219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.291423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.291455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 
00:27:45.796 [2024-11-19 11:38:59.291691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.291722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.291904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.291937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.292083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.292116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.292296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.292329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.292573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.292605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 
00:27:45.796 [2024-11-19 11:38:59.292708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.292741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.292998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.293032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.293161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.293199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.293313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.293345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.293543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.293576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 
00:27:45.796 [2024-11-19 11:38:59.293748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.293780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.293973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.294007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.796 [2024-11-19 11:38:59.294138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.796 [2024-11-19 11:38:59.294171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.796 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.294406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.294439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.294607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.294639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 
00:27:45.797 [2024-11-19 11:38:59.294743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.294775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.294955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.294987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.295124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.295156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.295265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.295297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.295472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.295505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 
00:27:45.797 [2024-11-19 11:38:59.295757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.295788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.296061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.296094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.296205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.296236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.296425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.296458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.296672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.296704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 
00:27:45.797 [2024-11-19 11:38:59.296893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.296925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.297133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.297166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.297430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.297463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.297697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.297729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.297898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.297931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 
00:27:45.797 [2024-11-19 11:38:59.298230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.298263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.298430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.298463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.298702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.298734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.298906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.298938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.299149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.299189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 
00:27:45.797 [2024-11-19 11:38:59.299381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.299413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.299652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.299684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.299896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.299928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.300145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.300177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.300355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.300387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 
00:27:45.797 [2024-11-19 11:38:59.300578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.300609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.300744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.300776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.301061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.301094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.301271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.301304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.301421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.301452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 
00:27:45.797 [2024-11-19 11:38:59.301691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.301723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.301928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.301970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.302097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.302129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.302417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.302488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.302776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.302811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 
00:27:45.797 [2024-11-19 11:38:59.302962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.302997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.303278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.303312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.303492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.303524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.797 qpair failed and we were unable to recover it. 00:27:45.797 [2024-11-19 11:38:59.303664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.797 [2024-11-19 11:38:59.303696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.303967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.304002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 
00:27:45.798 [2024-11-19 11:38:59.304188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.304220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.304344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.304376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.304508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.304540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.304747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.304779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.304962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.304996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 
00:27:45.798 [2024-11-19 11:38:59.305181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.305212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.305415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.305463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.305636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.305668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.305846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.305878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.306067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.306101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 
00:27:45.798 [2024-11-19 11:38:59.306290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.306322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.306524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.306556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.306813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.306844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.307081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.307114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.307352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.307386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 
00:27:45.798 [2024-11-19 11:38:59.307567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.307598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.307798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.307830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.308069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.308104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.308291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.308323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.308454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.308487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 
00:27:45.798 [2024-11-19 11:38:59.308734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.308769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.308970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.309004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.309133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.309165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.309348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.309380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.309596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.309628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 
00:27:45.798 [2024-11-19 11:38:59.309760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.309792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.309975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.310009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.310136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.310168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.310384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.310416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.310655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.310688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 
00:27:45.798 [2024-11-19 11:38:59.310810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.310841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.310970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.311002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.311211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.311243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.311572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.311643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.311909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.311945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 
00:27:45.798 [2024-11-19 11:38:59.312146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.312180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.312401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.312433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.312603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.312635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.312769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.312801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.798 qpair failed and we were unable to recover it. 00:27:45.798 [2024-11-19 11:38:59.312974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.798 [2024-11-19 11:38:59.313009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 
00:27:45.799 [2024-11-19 11:38:59.313195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.313226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.313409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.313442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.313681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.313713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.313897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.313930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.314062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.314094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 
00:27:45.799 [2024-11-19 11:38:59.314228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.314260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.314465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.314507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.314748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.314780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.314970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.315004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.315119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.315150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 
00:27:45.799 [2024-11-19 11:38:59.315434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.315466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.315648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.315683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.315874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.315906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.316150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.316183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.316293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.316326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 
00:27:45.799 [2024-11-19 11:38:59.316470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.316502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.316678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.316710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.316892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.316925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.317065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.317097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.317286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.317318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 
00:27:45.799 [2024-11-19 11:38:59.317510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.317541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.317660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.317692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.317871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.317901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.318152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.318186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.318358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.318388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 
00:27:45.799 [2024-11-19 11:38:59.318597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.318630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.318819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.318851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.319034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.319068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.319175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.319205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 00:27:45.799 [2024-11-19 11:38:59.319328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.799 [2024-11-19 11:38:59.319359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.799 qpair failed and we were unable to recover it. 
00:27:45.802 [2024-11-19 11:38:59.344186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.344219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.344418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.344449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.344564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.344595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.344856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.344888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.345078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.345110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 
00:27:45.802 [2024-11-19 11:38:59.345303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.345335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.345473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.345504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.345774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.345806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.346021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.346056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.346337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.346369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 
00:27:45.802 [2024-11-19 11:38:59.346494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.346526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.346657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.346688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.346814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.346847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.346967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.347005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.347279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.347312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 
00:27:45.802 [2024-11-19 11:38:59.347556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.347588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.347703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.347734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.347971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.348005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.348134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.802 [2024-11-19 11:38:59.348165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.802 qpair failed and we were unable to recover it. 00:27:45.802 [2024-11-19 11:38:59.348300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.348332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 
00:27:45.803 [2024-11-19 11:38:59.348601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.348633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.348802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.348833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.349022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.349054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.349292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.349322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.349453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.349485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 
00:27:45.803 [2024-11-19 11:38:59.349593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.349624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.349746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.349778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.350065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.350099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.350212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.350244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.350442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.350474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 
00:27:45.803 [2024-11-19 11:38:59.350597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.350627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.350731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.350761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.351002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.351036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.351209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.351240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.351500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.351533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 
00:27:45.803 [2024-11-19 11:38:59.351666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.351697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.351879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.351911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.352103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.352135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.352316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.352349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.352480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.352513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 
00:27:45.803 [2024-11-19 11:38:59.352718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.352750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.352958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.352991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.353161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.353192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.353317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.353349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.353522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.353555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 
00:27:45.803 [2024-11-19 11:38:59.353671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.353701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.353824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.353856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.353989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.354022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.354130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.354160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.354277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.354309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 
00:27:45.803 [2024-11-19 11:38:59.354537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.354569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.354682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.354715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.354847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.354879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.355074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.355114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.355373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.355405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 
00:27:45.803 [2024-11-19 11:38:59.355512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.355543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.355668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.355698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.355886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.355918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.356097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.356167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 00:27:45.803 [2024-11-19 11:38:59.356303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.803 [2024-11-19 11:38:59.356338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.803 qpair failed and we were unable to recover it. 
00:27:45.803 [2024-11-19 11:38:59.356466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.356497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-11-19 11:38:59.356729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.356760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-11-19 11:38:59.356956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.356988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-11-19 11:38:59.357163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.357195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-11-19 11:38:59.357376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.357407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 
00:27:45.804 [2024-11-19 11:38:59.357594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.357625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-11-19 11:38:59.357889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.357921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-11-19 11:38:59.358129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.358162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-11-19 11:38:59.358351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.358383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-11-19 11:38:59.358644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.358675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 
00:27:45.804 [2024-11-19 11:38:59.358805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.358837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-11-19 11:38:59.359074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.359107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-11-19 11:38:59.359234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.359265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-11-19 11:38:59.359380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.359410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 00:27:45.804 [2024-11-19 11:38:59.359597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.804 [2024-11-19 11:38:59.359626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.804 qpair failed and we were unable to recover it. 
00:27:45.804 [2024-11-19 11:38:59.359906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.804 [2024-11-19 11:38:59.359938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.804 qpair failed and we were unable to recover it.
[the same three-line record — posix_sock_create connect() failure (errno = 111), nvme_tcp_qpair_connect_sock error for tqpair=0x7f5064000b90 addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it." — repeats with advancing timestamps from 11:38:59.360120 through 11:38:59.385608; only the timestamps differ]
00:27:45.807 [2024-11-19 11:38:59.385859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.385890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.386118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.386151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.386353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.386385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.386574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.386606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.386726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.386756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 
00:27:45.807 [2024-11-19 11:38:59.387020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.387053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.387319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.387350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.387522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.387552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.387687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.387717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.387980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.388013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 
00:27:45.807 [2024-11-19 11:38:59.388146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.388178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.388353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.388384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.388620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.388651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.388828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.388859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.389110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.389142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 
00:27:45.807 [2024-11-19 11:38:59.389263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.389294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.389473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.389505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.389686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.389717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.389897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.389929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.390042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.390073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 
00:27:45.807 [2024-11-19 11:38:59.390338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.390369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.390485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.390517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.390628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.390658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.390829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.390870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.391002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.391036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 
00:27:45.807 [2024-11-19 11:38:59.391216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.391248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.391418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.391447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.391685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.391716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.391978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.392010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 00:27:45.807 [2024-11-19 11:38:59.392247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.807 [2024-11-19 11:38:59.392278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.807 qpair failed and we were unable to recover it. 
00:27:45.807 [2024-11-19 11:38:59.392455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.392486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.392690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.392721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.392988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.393019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.393216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.393247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.393422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.393452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 
00:27:45.808 [2024-11-19 11:38:59.393622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.393653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.393820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.393852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.394031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.394063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.394277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.394307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.394513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.394544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 
00:27:45.808 [2024-11-19 11:38:59.394681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.394711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.394973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.395006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.395190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.395220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.395403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.395434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.395601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.395632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 
00:27:45.808 [2024-11-19 11:38:59.395815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.395845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.396030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.396063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.396244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.396274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.396448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.396479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.396667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.396698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 
00:27:45.808 [2024-11-19 11:38:59.396877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.396909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.397048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.397079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.397315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.397346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.397462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.397492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.397706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.397738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 
00:27:45.808 [2024-11-19 11:38:59.397918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.397975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.398215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.398246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.398432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.398464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.398599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.398629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.398885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.398917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 
00:27:45.808 [2024-11-19 11:38:59.399166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.399197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.399331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.399360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.399598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.399629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.399807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.399844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.399987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.400020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 
00:27:45.808 [2024-11-19 11:38:59.400193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.400223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.400491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.400522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.400639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.400670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.400801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.400832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 00:27:45.808 [2024-11-19 11:38:59.401018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.808 [2024-11-19 11:38:59.401051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.808 qpair failed and we were unable to recover it. 
00:27:45.808 [2024-11-19 11:38:59.401331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-11-19 11:38:59.401362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-11-19 11:38:59.401576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-11-19 11:38:59.401607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-11-19 11:38:59.401850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-11-19 11:38:59.401881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-11-19 11:38:59.401983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-11-19 11:38:59.402013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 00:27:45.809 [2024-11-19 11:38:59.402258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.809 [2024-11-19 11:38:59.402289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.809 qpair failed and we were unable to recover it. 
00:27:45.809 [2024-11-19 11:38:59.402404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.809 [2024-11-19 11:38:59.402435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.809 qpair failed and we were unable to recover it.
00:27:45.812 [2024-11-19 11:38:59.428209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.428242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.428362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.428392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.428610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.428641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.428878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.428910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.429156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.429189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 
00:27:45.812 [2024-11-19 11:38:59.429365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.429396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.429671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.429702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.429882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.429914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.430079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.430114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.430261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.430291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 
00:27:45.812 [2024-11-19 11:38:59.430470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.430501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.430705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.430737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.430863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.430894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.431041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.431073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.431185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.431216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 
00:27:45.812 [2024-11-19 11:38:59.431326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.431356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.431490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.431521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.431782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.431813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.432003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.432036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.432210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.432241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 
00:27:45.812 [2024-11-19 11:38:59.432499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.432531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.432766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.432797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.433074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.433106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.433367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.433397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.433514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.433545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 
00:27:45.812 [2024-11-19 11:38:59.433717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.433747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.434025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.434057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.434196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.434225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.434405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.434435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.434606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.434637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 
00:27:45.812 [2024-11-19 11:38:59.434851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.434883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.434993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.435025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.435201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.435232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.435371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.435402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.435551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.435581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 
00:27:45.812 [2024-11-19 11:38:59.435819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.435850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.436111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.436145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.436315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.436347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.436556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.436588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 00:27:45.812 [2024-11-19 11:38:59.436858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.812 [2024-11-19 11:38:59.436890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.812 qpair failed and we were unable to recover it. 
00:27:45.813 [2024-11-19 11:38:59.437011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.437045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.437332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.437362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.437617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.437648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.437853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.437883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.438098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.438130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 
00:27:45.813 [2024-11-19 11:38:59.438320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.438352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.438532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.438561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.438742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.438773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.438964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.438997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.439105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.439136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 
00:27:45.813 [2024-11-19 11:38:59.439322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.439352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.439479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.439508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.439678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.439710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.439814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.439844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.439967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.439999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 
00:27:45.813 [2024-11-19 11:38:59.440190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.440222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.440410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.440440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.440677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.440708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.440826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.440856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.440974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.441006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 
00:27:45.813 [2024-11-19 11:38:59.441215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.441252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.441424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.441455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.441622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.441652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.441840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.441871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.442084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.442116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 
00:27:45.813 [2024-11-19 11:38:59.442298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.442329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.442432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.442464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.442595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.442625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.442838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.442869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.443065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.443098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 
00:27:45.813 [2024-11-19 11:38:59.443217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.443249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.443351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.443381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.443569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.443600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.443792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.443824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 00:27:45.813 [2024-11-19 11:38:59.443963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.813 [2024-11-19 11:38:59.443995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.813 qpair failed and we were unable to recover it. 
00:27:45.813 [2024-11-19 11:38:59.444182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.813 [2024-11-19 11:38:59.444214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:45.813 qpair failed and we were unable to recover it.
00:27:45.816 [last message sequence repeated for every reconnect attempt through 2024-11-19 11:38:59.469475; all connect() calls to 10.0.0.2:4420 failed with errno = 111]
00:27:45.816 [2024-11-19 11:38:59.469646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-11-19 11:38:59.469677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-11-19 11:38:59.469867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-11-19 11:38:59.469897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-11-19 11:38:59.470017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-11-19 11:38:59.470050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-11-19 11:38:59.470164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-11-19 11:38:59.470195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-11-19 11:38:59.470302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-11-19 11:38:59.470332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 
00:27:45.816 [2024-11-19 11:38:59.470591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-11-19 11:38:59.470622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-11-19 11:38:59.470744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-11-19 11:38:59.470775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-11-19 11:38:59.470908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-11-19 11:38:59.470938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-11-19 11:38:59.471161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-11-19 11:38:59.471192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.816 qpair failed and we were unable to recover it. 00:27:45.816 [2024-11-19 11:38:59.471404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.816 [2024-11-19 11:38:59.471436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-11-19 11:38:59.471674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.471705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.471895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.471926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.472063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.472095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.472353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.472383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.472576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.472608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-11-19 11:38:59.472783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.472814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.473001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.473034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.473209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.473241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.473353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.473385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.473620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.473650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-11-19 11:38:59.473853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.473885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.474102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.474134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.474316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.474347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.474529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.474559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.474687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.474718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-11-19 11:38:59.474845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.474876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.475069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.475102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.475301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.475331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.475529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.475561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.475820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.475850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-11-19 11:38:59.476053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.476091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.476332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.476363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.476481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.476512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.476752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.476783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.476898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.476929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-11-19 11:38:59.477194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.477226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.477411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.477442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.477639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.477668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.477783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.477814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.478077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.478111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-11-19 11:38:59.478234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.478266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.478386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.478417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.478606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.478637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.478827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.478858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.817 [2024-11-19 11:38:59.479138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.479170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 
00:27:45.817 [2024-11-19 11:38:59.479285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.817 [2024-11-19 11:38:59.479316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.817 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.479442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.479472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.479587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.479618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.479791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.479822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.480068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.480099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 
00:27:45.818 [2024-11-19 11:38:59.480222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.480251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.480421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.480451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.480711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.480743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.480980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.481011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.481281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.481312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 
00:27:45.818 [2024-11-19 11:38:59.481502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.481533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.481743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.481775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.481891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.481923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.482123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.482154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.482343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.482373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 
00:27:45.818 [2024-11-19 11:38:59.482486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.482516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.482710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.482741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.482982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.483015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.483252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.483283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.483414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.483445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 
00:27:45.818 [2024-11-19 11:38:59.483616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.483645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.483819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.483849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.484087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.484118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.484301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.484332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.484605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.484637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 
00:27:45.818 [2024-11-19 11:38:59.484818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.484854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.485036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.485069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.485276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.485308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.485514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.485544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 00:27:45.818 [2024-11-19 11:38:59.485669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.485699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 
00:27:45.818 [2024-11-19 11:38:59.485807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.818 [2024-11-19 11:38:59.485838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.818 qpair failed and we were unable to recover it. 
00:27:45.821 [2024-11-19 11:38:59.511248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.511284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.511469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.511500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.511761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.511792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.512065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.512098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.512287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.512317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 
00:27:45.821 [2024-11-19 11:38:59.512515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.512547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.512743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.512772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.512911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.512942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.513218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.513249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.513417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.513448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 
00:27:45.821 [2024-11-19 11:38:59.513564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.513594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.513777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.513807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.514016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.514046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.514227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.514257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.514435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.514467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 
00:27:45.821 [2024-11-19 11:38:59.514706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.514737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.514863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.514894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.515200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.515234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.515374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.821 [2024-11-19 11:38:59.515405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.821 qpair failed and we were unable to recover it. 00:27:45.821 [2024-11-19 11:38:59.515506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.515537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-11-19 11:38:59.515742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.515772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.516024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.516056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.516183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.516214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.516454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.516485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.516587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.516617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-11-19 11:38:59.516738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.516768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.516935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.516974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.517209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.517282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.517440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.517476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.517663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.517695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-11-19 11:38:59.517880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.517913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.518125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.518159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.518299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.518330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.518570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.518602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.518792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.518824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-11-19 11:38:59.519008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.519042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.519230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.519262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.519405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.519436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.519612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.519644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.519827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.519858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-11-19 11:38:59.520062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.520105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.520345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.520377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.520615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.520647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.520888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.520921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.521051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.521084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-11-19 11:38:59.521202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.521233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.521419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.521449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.521632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.521665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.521900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.521931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.522127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.522160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-11-19 11:38:59.522348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.522381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.522552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.522584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.522806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.522839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.522974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.523008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.523194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.523225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 
00:27:45.822 [2024-11-19 11:38:59.523328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.822 [2024-11-19 11:38:59.523360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.822 qpair failed and we were unable to recover it. 00:27:45.822 [2024-11-19 11:38:59.523549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.523579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.523774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.523806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.523913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.523945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.524223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.524256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 
00:27:45.823 [2024-11-19 11:38:59.524446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.524478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.524606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.524639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.524825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.524856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.525031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.525063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.525243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.525275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 
00:27:45.823 [2024-11-19 11:38:59.525407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.525438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.525621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.525653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.525937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.525978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.526235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.526267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.526517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.526549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 
00:27:45.823 [2024-11-19 11:38:59.526672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.526704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.526829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.526860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.527151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.527185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.527479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.527511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 00:27:45.823 [2024-11-19 11:38:59.527698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.823 [2024-11-19 11:38:59.527729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:45.823 qpair failed and we were unable to recover it. 
00:27:45.823 [2024-11-19 11:38:59.527915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.823 [2024-11-19 11:38:59.527956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:45.823 qpair failed and we were unable to recover it.
[... the same connect()-failed / qpair-failed record repeats for tqpair=0x7f5070000b90 from 11:38:59.528108 through 11:38:59.545916 ...]
00:27:46.105 [2024-11-19 11:38:59.546168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.105 [2024-11-19 11:38:59.546251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.105 qpair failed and we were unable to recover it.
[... the same record repeats for tqpair=0xadaba0 from 11:38:59.546477 through 11:38:59.553317 ...]
00:27:46.106 [2024-11-19 11:38:59.553506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.553538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.553796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.553828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.554070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.554103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.554239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.554270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.554450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.554481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 
00:27:46.106 [2024-11-19 11:38:59.554657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.554689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.554814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.554845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.555143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.555175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.555353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.555384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.555575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.555608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 
00:27:46.106 [2024-11-19 11:38:59.555741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.555773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.555881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.555918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.556042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.556075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.556191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.556222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.556339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.556370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 
00:27:46.106 [2024-11-19 11:38:59.556578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.556609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.556785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.556817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.556926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.556969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.557181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.106 [2024-11-19 11:38:59.557213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.106 qpair failed and we were unable to recover it. 00:27:46.106 [2024-11-19 11:38:59.557405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.557437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 
00:27:46.107 [2024-11-19 11:38:59.557698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.557730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.557969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.558003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.558121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.558153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.558398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.558431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.558633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.558665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 
00:27:46.107 [2024-11-19 11:38:59.558846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.558880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.559001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.559034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.559232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.559264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.559381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.559413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.559651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.559682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 
00:27:46.107 [2024-11-19 11:38:59.559870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.559902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.560086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.560118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.560358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.560391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.560651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.560684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.560811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.560842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 
00:27:46.107 [2024-11-19 11:38:59.560972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.561005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.561196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.561227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.561422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.561454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.561556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.561587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.561882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.561913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 
00:27:46.107 [2024-11-19 11:38:59.562100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.562133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.562277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.562309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.562434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.562465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.562706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.562738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.562924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.562972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 
00:27:46.107 [2024-11-19 11:38:59.563101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.563133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.563303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.563334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.563526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.563558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.563747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.563778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.563969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.564003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 
00:27:46.107 [2024-11-19 11:38:59.564125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.564158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.564437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.564469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.564603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.564636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.564764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.564795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.564983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.565016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 
00:27:46.107 [2024-11-19 11:38:59.565245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.565277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.565458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.107 [2024-11-19 11:38:59.565489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.107 qpair failed and we were unable to recover it. 00:27:46.107 [2024-11-19 11:38:59.565751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.565784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.566034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.566067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.566257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.566289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 
00:27:46.108 [2024-11-19 11:38:59.566412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.566444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.566576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.566608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.566738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.566770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.566892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.566924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.567213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.567246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 
00:27:46.108 [2024-11-19 11:38:59.567359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.567392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.567588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.567620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.567809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.567840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.568032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.568065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.568309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.568341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 
00:27:46.108 [2024-11-19 11:38:59.568600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.568632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.568815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.568847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.568973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.569006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.569138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.569169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.569430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.569462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 
00:27:46.108 [2024-11-19 11:38:59.569566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.569599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.569739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.569770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.569964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.569998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.570171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.570203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.570308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.570345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 
00:27:46.108 [2024-11-19 11:38:59.570546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.570578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.570786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.570817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.570956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.570989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.571162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.571194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 00:27:46.108 [2024-11-19 11:38:59.571456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.108 [2024-11-19 11:38:59.571488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.108 qpair failed and we were unable to recover it. 
00:27:46.108 [2024-11-19 11:38:59.571625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.108 [2024-11-19 11:38:59.571656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.108 qpair failed and we were unable to recover it.
00:27:46.108 [2024-11-19 11:38:59.571827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.108 [2024-11-19 11:38:59.571859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.108 qpair failed and we were unable to recover it.
00:27:46.108 [2024-11-19 11:38:59.572129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.108 [2024-11-19 11:38:59.572162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.108 qpair failed and we were unable to recover it.
00:27:46.108 [2024-11-19 11:38:59.572403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.108 [2024-11-19 11:38:59.572434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.108 qpair failed and we were unable to recover it.
00:27:46.108 [2024-11-19 11:38:59.572604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.108 [2024-11-19 11:38:59.572636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.108 qpair failed and we were unable to recover it.
00:27:46.108 [2024-11-19 11:38:59.572877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.572909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.573160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.573193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.573473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.573504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.573641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.573672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.573861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.573892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.574166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.574199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.574373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.574404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.574591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.574622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.574805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.574837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.575010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.575044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.575167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.575198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.575319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.575350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.575463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.575495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.575756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.575788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.576026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.576058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.576242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.576274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.576534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.576565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.576754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.576787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.577057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.577090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.577267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.577299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.577492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.577524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.577640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.577671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.577926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.577967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.578203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.578235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.578423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.578455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.578589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.578620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.578808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.578840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.579015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.579048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.579167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.579198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.579445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.579477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.579651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.579690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.579873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.579904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.580048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.580081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.580208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.580240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.580432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.580463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.580647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.580679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.580851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.580882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.580996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.109 [2024-11-19 11:38:59.581030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.109 qpair failed and we were unable to recover it.
00:27:46.109 [2024-11-19 11:38:59.581163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.581195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.581411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.581443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.581618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.581649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.581781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.581813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.581926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.581969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.582229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.582261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.582401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.582434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.582538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.582570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.582779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.582810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.582934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.582977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.583167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.583198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.583440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.583474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.583665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.583696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.583904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.583936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.584066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.584098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.584289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.584320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.584560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.584591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.584723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.584754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.584923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.584967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.585273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.585310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.585516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.585547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.585748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.585780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.585968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.586001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.586185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.586217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.586428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.586460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.586631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.586662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.586787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.586819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.587004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.587037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.587219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.587251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.587357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.587388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.587509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.587541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.587708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.587740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.587913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.587945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.588135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.588167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.588338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.588369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.588551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.588583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.588713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.588744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.588933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.110 [2024-11-19 11:38:59.588996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.110 qpair failed and we were unable to recover it.
00:27:46.110 [2024-11-19 11:38:59.589186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.589218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.589469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.589501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.589704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.589736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.589929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.589972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.590187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.590217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.590394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.590426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.590551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.590582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.590700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.590733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.590970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.591004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.591133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.591164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.591334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.591366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.591555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.591586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.591753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.591784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.592022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.111 [2024-11-19 11:38:59.592054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.111 qpair failed and we were unable to recover it.
00:27:46.111 [2024-11-19 11:38:59.592227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.592258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.592429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.592461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.592591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.592622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.592908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.592939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.593231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.593263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 
00:27:46.111 [2024-11-19 11:38:59.593511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.593542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.593758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.593790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.594035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.594067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.594194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.594231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.594419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.594452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 
00:27:46.111 [2024-11-19 11:38:59.594564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.594595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.594832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.594863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.595102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.595135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.595254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.595286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.595472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.595504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 
00:27:46.111 [2024-11-19 11:38:59.595692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.595723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.595895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.595927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.596069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.596101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.596279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.596311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.596491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.596523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 
00:27:46.111 [2024-11-19 11:38:59.596651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.596681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.596873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.596905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.597092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.597125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.597316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.597348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 00:27:46.111 [2024-11-19 11:38:59.597589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.111 [2024-11-19 11:38:59.597621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.111 qpair failed and we were unable to recover it. 
00:27:46.112 [2024-11-19 11:38:59.597858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.597890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.598111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.598144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.598434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.598466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.598645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.598677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.598861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.598892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 
00:27:46.112 [2024-11-19 11:38:59.599016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.599049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.599168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.599199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.599461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.599492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.599622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.599654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.599863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.599895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 
00:27:46.112 [2024-11-19 11:38:59.600086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.600126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.600317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.600349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.600537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.600569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.600745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.600776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.600968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.601002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 
00:27:46.112 [2024-11-19 11:38:59.601242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.601274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.601448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.601479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.601676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.601708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.601888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.601919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.602220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.602253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 
00:27:46.112 [2024-11-19 11:38:59.602375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.602406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.602584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.602616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.602801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.602831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.602961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.602995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.603264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.603297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 
00:27:46.112 [2024-11-19 11:38:59.603555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.603586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.603829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.603861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.604051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.604084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.604269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.604300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.604504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.604535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 
00:27:46.112 [2024-11-19 11:38:59.604710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.604742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.604920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.604960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.605224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.605255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.605385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.605417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.605587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.605617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 
00:27:46.112 [2024-11-19 11:38:59.605788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.605820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.605989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.606022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.112 [2024-11-19 11:38:59.606283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.112 [2024-11-19 11:38:59.606315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.112 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.606450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.606482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.606615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.606647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 
00:27:46.113 [2024-11-19 11:38:59.606816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.606847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.607028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.607061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.607186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.607218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.607476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.607507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.607679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.607709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 
00:27:46.113 [2024-11-19 11:38:59.607825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.607856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.607975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.608008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.608179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.608211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.608390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.608422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.608540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.608571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 
00:27:46.113 [2024-11-19 11:38:59.608681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.608713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.608832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.608870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.609038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.609070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.609329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.609361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.609483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.609515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 
00:27:46.113 [2024-11-19 11:38:59.609619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.609651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.609784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.609816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.609935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.609987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.610113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.610146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.610319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.610349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 
00:27:46.113 [2024-11-19 11:38:59.610587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.610619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.610827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.610860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.611042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.611075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.611260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.611291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 00:27:46.113 [2024-11-19 11:38:59.611396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.113 [2024-11-19 11:38:59.611427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.113 qpair failed and we were unable to recover it. 
00:27:46.116 [2024-11-19 11:38:59.636192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.116 [2024-11-19 11:38:59.636224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.116 qpair failed and we were unable to recover it. 00:27:46.116 [2024-11-19 11:38:59.636345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.116 [2024-11-19 11:38:59.636377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.116 qpair failed and we were unable to recover it. 00:27:46.116 [2024-11-19 11:38:59.636579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.116 [2024-11-19 11:38:59.636610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.116 qpair failed and we were unable to recover it. 00:27:46.116 [2024-11-19 11:38:59.636793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.116 [2024-11-19 11:38:59.636825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.116 qpair failed and we were unable to recover it. 00:27:46.116 [2024-11-19 11:38:59.636960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.116 [2024-11-19 11:38:59.636993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.116 qpair failed and we were unable to recover it. 
00:27:46.116 [2024-11-19 11:38:59.637165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.116 [2024-11-19 11:38:59.637196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.116 qpair failed and we were unable to recover it. 00:27:46.116 [2024-11-19 11:38:59.637406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.116 [2024-11-19 11:38:59.637438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.116 qpair failed and we were unable to recover it. 00:27:46.116 [2024-11-19 11:38:59.637613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.116 [2024-11-19 11:38:59.637644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.116 qpair failed and we were unable to recover it. 00:27:46.116 [2024-11-19 11:38:59.637848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.116 [2024-11-19 11:38:59.637880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.116 qpair failed and we were unable to recover it. 00:27:46.116 [2024-11-19 11:38:59.638066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.116 [2024-11-19 11:38:59.638122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.116 qpair failed and we were unable to recover it. 
00:27:46.116 [2024-11-19 11:38:59.638262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.116 [2024-11-19 11:38:59.638294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.116 qpair failed and we were unable to recover it. 00:27:46.116 [2024-11-19 11:38:59.638395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.116 [2024-11-19 11:38:59.638426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.116 qpair failed and we were unable to recover it. 00:27:46.116 [2024-11-19 11:38:59.638607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.638645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.638835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.638866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.639071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.639104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 
00:27:46.117 [2024-11-19 11:38:59.639224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.639257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.639372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.639404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.639520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.639551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.639653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.639684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.639872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.639904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 
00:27:46.117 [2024-11-19 11:38:59.640117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.640150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.640326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.640358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.640544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.640575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.640694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.640725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.640914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.640946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 
00:27:46.117 [2024-11-19 11:38:59.641079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.641111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.641298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.641331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.641435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.641466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.641637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.641668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.641797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.641829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 
00:27:46.117 [2024-11-19 11:38:59.642067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.642100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.642301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.642333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.642450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.642482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.642660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.642691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.642902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.642934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 
00:27:46.117 [2024-11-19 11:38:59.643131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.643163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.643331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.643362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.643537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.643568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.643801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.643833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.644015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.644048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 
00:27:46.117 [2024-11-19 11:38:59.644287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.644319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.644451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.644483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.644606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.644637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.644827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.644859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.645122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.645156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 
00:27:46.117 [2024-11-19 11:38:59.645339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.117 [2024-11-19 11:38:59.645371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.117 qpair failed and we were unable to recover it. 00:27:46.117 [2024-11-19 11:38:59.645487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.645519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.645756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.645788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.645984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.646017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.646145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.646178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 
00:27:46.118 [2024-11-19 11:38:59.646443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.646474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.646665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.646698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.646870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.646901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.647094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.647132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.647402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.647433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 
00:27:46.118 [2024-11-19 11:38:59.647577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.647608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.647814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.647845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.647965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.647998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.648116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.648148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.648263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.648294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 
00:27:46.118 [2024-11-19 11:38:59.648533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.648565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.648771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.648802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.648992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.649025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.649229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.649261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.649393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.649426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 
00:27:46.118 [2024-11-19 11:38:59.649662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.649693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.649861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.649893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.650153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.650187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.650358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.650389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.650655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.650687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 
00:27:46.118 [2024-11-19 11:38:59.650879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.650910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.651105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.651138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.651338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.651369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.651552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.651584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.651774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.651805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 
00:27:46.118 [2024-11-19 11:38:59.651994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.652028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.652239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.652272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.652446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.652477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.652686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.652718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.652823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.652854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 
00:27:46.118 [2024-11-19 11:38:59.653067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.653106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.653280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.653311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.653583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.653614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.118 [2024-11-19 11:38:59.653750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.118 [2024-11-19 11:38:59.653781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.118 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.653908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.653940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 
00:27:46.119 [2024-11-19 11:38:59.654075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.654106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.654284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.654316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.654444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.654476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.654607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.654639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.654836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.654867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 
00:27:46.119 [2024-11-19 11:38:59.655104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.655139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.655252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.655284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.655519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.655550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.655787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.655819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.655996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.656030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 
00:27:46.119 [2024-11-19 11:38:59.656290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.656322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.656523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.656555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.656736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.656767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.656977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.657010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.657197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.657228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 
00:27:46.119 [2024-11-19 11:38:59.657493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.657525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.657713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.657745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.657872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.657904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.658102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.658135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.658403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.658436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 
00:27:46.119 [2024-11-19 11:38:59.658605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.658637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.658771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.658802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.659093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.659126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.659341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.659374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.659559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.659590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 
00:27:46.119 [2024-11-19 11:38:59.659706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.659738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.659941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.660001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.660222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.660254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.660463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.660495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.660621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.660653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 
00:27:46.119 [2024-11-19 11:38:59.660916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.660958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.661086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.661118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.661400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.661432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.661630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.661661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.661863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.661894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 
00:27:46.119 [2024-11-19 11:38:59.662026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.662058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.662192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.119 [2024-11-19 11:38:59.662230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.119 qpair failed and we were unable to recover it. 00:27:46.119 [2024-11-19 11:38:59.662492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.662523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.662709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.662741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.662977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.663010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 
00:27:46.120 [2024-11-19 11:38:59.663218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.663250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.663483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.663515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.663729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.663762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.663966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.664000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.664266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.664298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 
00:27:46.120 [2024-11-19 11:38:59.664485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.664516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.664733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.664765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.665001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.665033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.665157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.665188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.665371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.665402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 
00:27:46.120 [2024-11-19 11:38:59.665616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.665648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.665851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.665883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.666013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.666045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.666229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.666260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.666390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.666421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 
00:27:46.120 [2024-11-19 11:38:59.666541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.666572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.666810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.666842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.667047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.667080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.667224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.667257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.667445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.667476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 
00:27:46.120 [2024-11-19 11:38:59.667603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.667635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.667823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.667856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.668040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.668073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.668200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.668238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.668478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.668510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 
00:27:46.120 [2024-11-19 11:38:59.668701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.668733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.668868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.668900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.669122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.669156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.669424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.669456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.669637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.669670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 
00:27:46.120 [2024-11-19 11:38:59.669909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.669940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.670233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.670265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.670446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.670476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.670659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.670691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 00:27:46.120 [2024-11-19 11:38:59.670871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.120 [2024-11-19 11:38:59.670901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.120 qpair failed and we were unable to recover it. 
00:27:46.120 [2024-11-19 11:38:59.671036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.671068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 00:27:46.121 [2024-11-19 11:38:59.671266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.671298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 00:27:46.121 [2024-11-19 11:38:59.671483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.671516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 00:27:46.121 [2024-11-19 11:38:59.671634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.671666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 00:27:46.121 [2024-11-19 11:38:59.671850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.671880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 
00:27:46.121 [2024-11-19 11:38:59.672014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.672048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 00:27:46.121 [2024-11-19 11:38:59.672243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.672275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 00:27:46.121 [2024-11-19 11:38:59.672505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.672537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 00:27:46.121 [2024-11-19 11:38:59.672775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.672807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 00:27:46.121 [2024-11-19 11:38:59.673010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.673044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 
00:27:46.121 [2024-11-19 11:38:59.673167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.673199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 00:27:46.121 [2024-11-19 11:38:59.673390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.673421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 00:27:46.121 [2024-11-19 11:38:59.673604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.673636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 00:27:46.121 [2024-11-19 11:38:59.673921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.673960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 00:27:46.121 [2024-11-19 11:38:59.674105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.121 [2024-11-19 11:38:59.674136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.121 qpair failed and we were unable to recover it. 
00:27:46.122 [... connect() failed (errno = 111) / sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 repeated with successive timestamps through 2024-11-19 11:38:59.686680; qpair failed and we were unable to recover it ...]
00:27:46.122 [2024-11-19 11:38:59.686943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.122 [2024-11-19 11:38:59.687022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:46.122 qpair failed and we were unable to recover it.
00:27:46.122 [2024-11-19 11:38:59.687287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.122 [2024-11-19 11:38:59.687325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:46.122 qpair failed and we were unable to recover it.
00:27:46.122 [2024-11-19 11:38:59.687594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.123 [2024-11-19 11:38:59.687627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:46.123 qpair failed and we were unable to recover it.
00:27:46.123 [2024-11-19 11:38:59.687885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.123 [2024-11-19 11:38:59.687918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:46.123 qpair failed and we were unable to recover it.
00:27:46.123 [2024-11-19 11:38:59.688059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.123 [2024-11-19 11:38:59.688092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:46.123 qpair failed and we were unable to recover it.
00:27:46.124 [... connect() failed (errno = 111) / sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 repeated with successive timestamps through 2024-11-19 11:38:59.698506; qpair failed and we were unable to recover it ...]
00:27:46.124 [2024-11-19 11:38:59.698676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.698708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.698881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.698913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.699051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.699084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.699270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.699302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.699484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.699515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 
00:27:46.124 [2024-11-19 11:38:59.699693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.699725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.699899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.699931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.700202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.700234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.700420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.700452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.700580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.700611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 
00:27:46.124 [2024-11-19 11:38:59.700827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.700859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.701114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.701148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.701275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.701307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.701481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.701513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.701635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.701666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 
00:27:46.124 [2024-11-19 11:38:59.701785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.701817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.702057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.702090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.702283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.702314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.702422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.702454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.702636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.702667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 
00:27:46.124 [2024-11-19 11:38:59.702794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.702825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.703002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.703035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.124 [2024-11-19 11:38:59.703235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.124 [2024-11-19 11:38:59.703266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.124 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.703383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.703415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.703601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.703633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 
00:27:46.125 [2024-11-19 11:38:59.703820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.703852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.704078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.704111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.704295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.704327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.704609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.704641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.704834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.704866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 
00:27:46.125 [2024-11-19 11:38:59.705000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.705034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.705151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.705181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.705358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.705390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.705504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.705536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.705775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.705806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 
00:27:46.125 [2024-11-19 11:38:59.705922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.705963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.706204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.706236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.706405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.706436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.706568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.706606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.706727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.706758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 
00:27:46.125 [2024-11-19 11:38:59.706890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.706921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.707166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.707198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.707386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.707419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.707685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.707716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.707967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.708000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 
00:27:46.125 [2024-11-19 11:38:59.708239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.708271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.708538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.708570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.708675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.708707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.708987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.709020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.709283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.709315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 
00:27:46.125 [2024-11-19 11:38:59.709583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.709615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.709757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.709788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.709914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.709946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.710146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.710177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.710445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.710477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 
00:27:46.125 [2024-11-19 11:38:59.710661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.710693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.710883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.710915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.711028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.711060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.711299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.711331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 00:27:46.125 [2024-11-19 11:38:59.711505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.125 [2024-11-19 11:38:59.711537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.125 qpair failed and we were unable to recover it. 
00:27:46.125 [2024-11-19 11:38:59.711716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.711748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 00:27:46.126 [2024-11-19 11:38:59.711993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.712026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 00:27:46.126 [2024-11-19 11:38:59.712214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.712246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 00:27:46.126 [2024-11-19 11:38:59.712364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.712396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 00:27:46.126 [2024-11-19 11:38:59.712526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.712558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 
00:27:46.126 [2024-11-19 11:38:59.712734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.712766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 00:27:46.126 [2024-11-19 11:38:59.712960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.712992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 00:27:46.126 [2024-11-19 11:38:59.713227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.713259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 00:27:46.126 [2024-11-19 11:38:59.713516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.713548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 00:27:46.126 [2024-11-19 11:38:59.713808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.713839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 
00:27:46.126 [2024-11-19 11:38:59.714105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.714139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 00:27:46.126 [2024-11-19 11:38:59.714275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.714307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 00:27:46.126 [2024-11-19 11:38:59.714490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.714522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 00:27:46.126 [2024-11-19 11:38:59.714763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.714795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 00:27:46.126 [2024-11-19 11:38:59.714978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.126 [2024-11-19 11:38:59.715011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.126 qpair failed and we were unable to recover it. 
00:27:46.126 [2024-11-19 11:38:59.715190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.126 [2024-11-19 11:38:59.715221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:46.126 qpair failed and we were unable to recover it.
00:27:46.126 [last three messages repeated with advancing timestamps through 11:38:59.740907]
00:27:46.129 [2024-11-19 11:38:59.741132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.741165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-11-19 11:38:59.741351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.741383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-11-19 11:38:59.741555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.741587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-11-19 11:38:59.741788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.741820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-11-19 11:38:59.742091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.742125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 
00:27:46.129 [2024-11-19 11:38:59.742405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.742437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-11-19 11:38:59.742716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.742748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-11-19 11:38:59.742963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.742996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-11-19 11:38:59.743233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.743265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-11-19 11:38:59.743459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.743492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 
00:27:46.129 [2024-11-19 11:38:59.743671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.743702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-11-19 11:38:59.743885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.743916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-11-19 11:38:59.744138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.744170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-11-19 11:38:59.744287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.744318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 00:27:46.129 [2024-11-19 11:38:59.744551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.744582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.129 qpair failed and we were unable to recover it. 
00:27:46.129 [2024-11-19 11:38:59.744846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.129 [2024-11-19 11:38:59.744878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.745115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.745149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.745420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.745451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.745748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.745780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.746010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.746044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-11-19 11:38:59.746307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.746339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.746562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.746594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.746866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.746899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.747124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.747157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.747393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.747425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-11-19 11:38:59.747675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.747708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.747880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.747911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.748134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.748166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.748423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.748455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.748713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.748744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-11-19 11:38:59.749030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.749064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.749332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.749364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.749583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.749615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.749826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.749858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.750106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.750140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-11-19 11:38:59.750331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.750369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.750637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.750669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.750957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.750991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.751281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.751313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.751495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.751526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-11-19 11:38:59.751715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.751748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.752010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.752042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.752224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.752256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.752427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.752459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.752747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.752778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-11-19 11:38:59.752966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.752999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.753184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.753216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.753505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.753536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.753727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.753759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.754029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.754063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 
00:27:46.130 [2024-11-19 11:38:59.754263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.754294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.754547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.754578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.754767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.130 [2024-11-19 11:38:59.754799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.130 qpair failed and we were unable to recover it. 00:27:46.130 [2024-11-19 11:38:59.754936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.754977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.755242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.755273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 
00:27:46.131 [2024-11-19 11:38:59.755553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.755584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.755888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.755919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.756181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.756214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.756422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.756454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.756695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.756727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 
00:27:46.131 [2024-11-19 11:38:59.756909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.756942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.757151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.757183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.757322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.757354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.757686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.757717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.757919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.757974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 
00:27:46.131 [2024-11-19 11:38:59.758187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.758220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.758417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.758449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.758717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.758748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.758971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.759004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.759131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.759163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 
00:27:46.131 [2024-11-19 11:38:59.759335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.759366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.759549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.759580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.759821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.759853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.760095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.760128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-11-19 11:38:59.760362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.131 [2024-11-19 11:38:59.760394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.131 qpair failed and we were unable to recover it. 
00:27:46.131 [2024-11-19 11:38:59.760645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.131 [2024-11-19 11:38:59.760682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:46.131 qpair failed and we were unable to recover it.
00:27:46.134 [last message repeated through 2024-11-19 11:38:59.790326: connect() to addr=10.0.0.2, port=4420 refused with errno = 111 (ECONNREFUSED) for tqpair=0x7f5068000b90; every attempt ended "qpair failed and we were unable to recover it."]
00:27:46.134 [2024-11-19 11:38:59.790591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-11-19 11:38:59.790622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-11-19 11:38:59.790839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-11-19 11:38:59.790870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-11-19 11:38:59.791140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-11-19 11:38:59.791173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-11-19 11:38:59.791455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-11-19 11:38:59.791486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-11-19 11:38:59.791686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-11-19 11:38:59.791717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 
00:27:46.134 [2024-11-19 11:38:59.791974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-11-19 11:38:59.792013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-11-19 11:38:59.792298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-11-19 11:38:59.792330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-11-19 11:38:59.792599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-11-19 11:38:59.792631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-11-19 11:38:59.792905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.134 [2024-11-19 11:38:59.792936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.134 qpair failed and we were unable to recover it. 00:27:46.134 [2024-11-19 11:38:59.793142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.793175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-11-19 11:38:59.793406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.793437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.793577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.793609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.793781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.793812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.793986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.794020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.794238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.794270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-11-19 11:38:59.794536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.794567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.794829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.794861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.795157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.795190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.795455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.795486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.795780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.795811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-11-19 11:38:59.796081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.796115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.796321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.796353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.796604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.796636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.796886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.796917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.797216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.797248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-11-19 11:38:59.797527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.797558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.797827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.797858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.798152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.798185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.798457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.798489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.798730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.798761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-11-19 11:38:59.799028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.799061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.799233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.799264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.799540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.799573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.799845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.799876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.800138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.800171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-11-19 11:38:59.800383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.800415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.800631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.800664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.800907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.800938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.801204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.801237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.801528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.801559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-11-19 11:38:59.801833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.801864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.802078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.802111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.802376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.802408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.802650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.802681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.802956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.802988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 
00:27:46.135 [2024-11-19 11:38:59.803259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.803298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.135 [2024-11-19 11:38:59.803488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.135 [2024-11-19 11:38:59.803519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.135 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.803769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.803801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.803991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.804025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.804161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.804192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 
00:27:46.136 [2024-11-19 11:38:59.804478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.804510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.804696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.804728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.804998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.805031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.805223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.805256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.805464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.805496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 
00:27:46.136 [2024-11-19 11:38:59.805763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.805795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.806043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.806077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.806338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.806369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.806553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.806585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.806854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.806887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 
00:27:46.136 [2024-11-19 11:38:59.807137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.807170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.807413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.807445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.807578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.807610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.807798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.807830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.808094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.808128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 
00:27:46.136 [2024-11-19 11:38:59.808372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.808404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.808615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.808646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.808917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.808955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.809197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.809228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.809469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.809500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 
00:27:46.136 [2024-11-19 11:38:59.809779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.809811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.810054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.810088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.810349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.810382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.810592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.810624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.810877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.810908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 
00:27:46.136 [2024-11-19 11:38:59.811233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.811267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.811533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.811565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.811849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.811881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.812164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.812198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 00:27:46.136 [2024-11-19 11:38:59.812443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.136 [2024-11-19 11:38:59.812475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.136 qpair failed and we were unable to recover it. 
00:27:46.136 [2024-11-19 11:38:59.812763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.812794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.812991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.813025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.813267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.813300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.813472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.813504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.813794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.813826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 
00:27:46.137 [2024-11-19 11:38:59.813983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.814023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.814293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.814326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.814582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.814614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.814810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.814842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.815061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.815094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 
00:27:46.137 [2024-11-19 11:38:59.815286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.815318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.815576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.815607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.815783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.815814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.816072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.816104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.816293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.816326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 
00:27:46.137 [2024-11-19 11:38:59.816458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.816490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.816756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.816789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.816985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.817018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.817274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.817306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.817494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.817526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 
00:27:46.137 [2024-11-19 11:38:59.817799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.817831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.818106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.818140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.818341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.818373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.818639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.818688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.818936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.818978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 
00:27:46.137 [2024-11-19 11:38:59.819269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.819300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.819570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.819602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.819811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.819843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.820019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.820052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.820295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.820328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 
00:27:46.137 [2024-11-19 11:38:59.820600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.820632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.820924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.820973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.821176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.821209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.821474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.821505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.821769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.821801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 
00:27:46.137 [2024-11-19 11:38:59.822016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.822049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.822179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.822211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.822485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.137 [2024-11-19 11:38:59.822517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.137 qpair failed and we were unable to recover it. 00:27:46.137 [2024-11-19 11:38:59.822828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.822860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.823145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.823178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 
00:27:46.138 [2024-11-19 11:38:59.823397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.823430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.823684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.823716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.823965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.823998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.824215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.824248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.824424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.824455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 
00:27:46.138 [2024-11-19 11:38:59.824698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.824736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.824861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.824893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.825078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.825111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.825287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.825319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.825496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.825528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 
00:27:46.138 [2024-11-19 11:38:59.825822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.825853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.826116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.826149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.826374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.826406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.826673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.826704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.826999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.827033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 
00:27:46.138 [2024-11-19 11:38:59.827297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.827329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.827604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.827636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.827853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.827884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.828076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.828109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.828435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.828468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 
00:27:46.138 [2024-11-19 11:38:59.828733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.828765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.828967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.829001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.829259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.829291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.829496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.829528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.829722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.829754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 
00:27:46.138 [2024-11-19 11:38:59.830018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.830052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.830250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.830282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.830536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.830567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.830813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.830845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.831042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.831076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 
00:27:46.138 [2024-11-19 11:38:59.831260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.831292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.831487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.831518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.831760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.831834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.832126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.832165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.138 [2024-11-19 11:38:59.832315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.832349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 
00:27:46.138 [2024-11-19 11:38:59.832597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.138 [2024-11-19 11:38:59.832629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.138 qpair failed and we were unable to recover it. 00:27:46.139 [2024-11-19 11:38:59.832815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.832847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-11-19 11:38:59.833071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.833105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-11-19 11:38:59.833377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.833408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-11-19 11:38:59.833623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.833656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 
00:27:46.139 [2024-11-19 11:38:59.833903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.833934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-11-19 11:38:59.834242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.834276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-11-19 11:38:59.834553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.834584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-11-19 11:38:59.834854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.834886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-11-19 11:38:59.835180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.835214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 
00:27:46.139 [2024-11-19 11:38:59.835349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.835391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-11-19 11:38:59.835506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.835538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-11-19 11:38:59.835754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.835787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-11-19 11:38:59.836058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.836092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 00:27:46.139 [2024-11-19 11:38:59.836309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.139 [2024-11-19 11:38:59.836341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.139 qpair failed and we were unable to recover it. 
00:27:46.139 [2024-11-19 11:38:59.836521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.836553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.836823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.836855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.837068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.837101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.837379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.837413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.837612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.837644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.837900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.837933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.838219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.838253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.838529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.838578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.838847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.838879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.839120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.839154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.839406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.839439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.839626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.839659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.839853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.839885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.840163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.840197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.840472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.840504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.840705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.840737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.840990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.841024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.841210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.841242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.841523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.841556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.841828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.841859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.842120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.842154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.842459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.842491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.842753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.139 [2024-11-19 11:38:59.842786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.139 qpair failed and we were unable to recover it.
00:27:46.139 [2024-11-19 11:38:59.843042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.843076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.843256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.843289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.843587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.843619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.843888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.843920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.844183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.844216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.844415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.844447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.844717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.844748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.845034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.845068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.845346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.845379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.845658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.845690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.845985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.846038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.846229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.846262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.846552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.846590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.846853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.846885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.847160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.847194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.847480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.847513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.847640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.847672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.847920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.847959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.848259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.848292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.848572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.848605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.848855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.848888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.849154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.849188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.849400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.849433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.849714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.849746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.849995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.850028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.850290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.850324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.850580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.850613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.850829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.850861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.851131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.851165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.851381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.851414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.851552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.851584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.851852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.851884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.140 [2024-11-19 11:38:59.852017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.140 [2024-11-19 11:38:59.852051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.140 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.852238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.852270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.852540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.852573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.852795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.852827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.853027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.853060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.853275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.853307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.853565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.853596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.853790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.853829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.854100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.854134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.854330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.854362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.854612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.854645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.854924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.854966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.855240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.855273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.855554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.855587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.855931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.855972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.856251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.856283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.856581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.856614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.856907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.856939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.857226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.857259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.857540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.857573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.857828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.857860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.858051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.858087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.858297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.858330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.858465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.858515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.858722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.858754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.859020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.859054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.859290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.859321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.859602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.859634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.859870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.859902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.860143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.860177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.860453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.860485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.860742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.860775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.141 [2024-11-19 11:38:59.861035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.141 [2024-11-19 11:38:59.861070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.141 qpair failed and we were unable to recover it.
00:27:46.423 [2024-11-19 11:38:59.861370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.423 [2024-11-19 11:38:59.861403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.423 qpair failed and we were unable to recover it.
00:27:46.423 [2024-11-19 11:38:59.861670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.423 [2024-11-19 11:38:59.861704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.423 qpair failed and we were unable to recover it.
00:27:46.423 [2024-11-19 11:38:59.861977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.423 [2024-11-19 11:38:59.862011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.423 qpair failed and we were unable to recover it.
00:27:46.423 [2024-11-19 11:38:59.862299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.423 [2024-11-19 11:38:59.862331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.423 qpair failed and we were unable to recover it.
00:27:46.423 [2024-11-19 11:38:59.862631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.423 [2024-11-19 11:38:59.862662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.423 qpair failed and we were unable to recover it.
00:27:46.423 [2024-11-19 11:38:59.862856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.423 [2024-11-19 11:38:59.862888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.423 qpair failed and we were unable to recover it.
00:27:46.423 [2024-11-19 11:38:59.863153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.423 [2024-11-19 11:38:59.863187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.863344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.863376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.863577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.863609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.863883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.863916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.864077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.864110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.864237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.864270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.864486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.864519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.864788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.864820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.865095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.865142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.865392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.865425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.865616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.865647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.865799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.865831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.866122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.866156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.866388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.866420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.866734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.866766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.866969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.867003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.867285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.867318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.867618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.867650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.867915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.867957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.868242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.868275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.868480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.424 [2024-11-19 11:38:59.868512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.424 qpair failed and we were unable to recover it.
00:27:46.424 [2024-11-19 11:38:59.868712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.868744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.868896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.868928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.869206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.869239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.869422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.869454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.869593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.869626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 
00:27:46.424 [2024-11-19 11:38:59.869875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.869907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.870211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.870246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.870514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.870546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.870811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.870843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.871142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.871176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 
00:27:46.424 [2024-11-19 11:38:59.871381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.871413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.871642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.871674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.871892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.871924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.872214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.872247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.872534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.872567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 
00:27:46.424 [2024-11-19 11:38:59.872844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.872877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.873072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.873105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.424 qpair failed and we were unable to recover it. 00:27:46.424 [2024-11-19 11:38:59.873361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.424 [2024-11-19 11:38:59.873393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.873609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.873642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.873836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.873868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 
00:27:46.425 [2024-11-19 11:38:59.874129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.874163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.874416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.874449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.874748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.874781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.874982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.875016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.875273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.875306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 
00:27:46.425 [2024-11-19 11:38:59.875559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.875591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.875849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.875882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.876067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.876106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.876306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.876339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.876608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.876640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 
00:27:46.425 [2024-11-19 11:38:59.876823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.876856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.877075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.877109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.877306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.877339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.877605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.877637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.877935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.877976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 
00:27:46.425 [2024-11-19 11:38:59.878182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.878215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.878469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.878501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.878754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.878786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.878989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.879023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.879298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.879330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 
00:27:46.425 [2024-11-19 11:38:59.879608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.879640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.879833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.879865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.880152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.880186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.880369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.880400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.880651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.880684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 
00:27:46.425 [2024-11-19 11:38:59.880982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.881016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.881248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.881281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.881564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.881596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.881882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.881914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.882150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.882184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 
00:27:46.425 [2024-11-19 11:38:59.882311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.882343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.882614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.882646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.882920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.882962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.883186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.883219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 00:27:46.425 [2024-11-19 11:38:59.883500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.425 [2024-11-19 11:38:59.883533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.425 qpair failed and we were unable to recover it. 
00:27:46.425 [2024-11-19 11:38:59.883733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.883765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.883972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.884006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.884327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.884360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.884567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.884599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.884876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.884908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 
00:27:46.426 [2024-11-19 11:38:59.885182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.885216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.885426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.885460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.885726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.885759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.886046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.886080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.886362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.886394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 
00:27:46.426 [2024-11-19 11:38:59.886592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.886624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.886875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.886908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.887110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.887150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.887431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.887463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.887766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.887798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 
00:27:46.426 [2024-11-19 11:38:59.888080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.888113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.888319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.888352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.888630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.888662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.888959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.888993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 00:27:46.426 [2024-11-19 11:38:59.889263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.426 [2024-11-19 11:38:59.889296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.426 qpair failed and we were unable to recover it. 
00:27:46.426 [2024-11-19 11:38:59.889584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.426 [2024-11-19 11:38:59.889617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.426 qpair failed and we were unable to recover it.
00:27:46.426-00:27:46.429 [the same error triplet — connect() failed (errno = 111), sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 2024-11-19 11:38:59.889822 through 11:38:59.920844]
00:27:46.429 [2024-11-19 11:38:59.921120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.429 [2024-11-19 11:38:59.921155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.429 qpair failed and we were unable to recover it. 00:27:46.429 [2024-11-19 11:38:59.921398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.429 [2024-11-19 11:38:59.921430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.429 qpair failed and we were unable to recover it. 00:27:46.429 [2024-11-19 11:38:59.921756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.429 [2024-11-19 11:38:59.921789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.429 qpair failed and we were unable to recover it. 00:27:46.429 [2024-11-19 11:38:59.922095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.429 [2024-11-19 11:38:59.922130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.429 qpair failed and we were unable to recover it. 00:27:46.429 [2024-11-19 11:38:59.922309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.429 [2024-11-19 11:38:59.922342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.429 qpair failed and we were unable to recover it. 
00:27:46.429 [2024-11-19 11:38:59.922669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.429 [2024-11-19 11:38:59.922701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.429 qpair failed and we were unable to recover it. 00:27:46.429 [2024-11-19 11:38:59.922898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.429 [2024-11-19 11:38:59.922931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.429 qpair failed and we were unable to recover it. 00:27:46.429 [2024-11-19 11:38:59.923242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.429 [2024-11-19 11:38:59.923275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.429 qpair failed and we were unable to recover it. 00:27:46.429 [2024-11-19 11:38:59.923497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.429 [2024-11-19 11:38:59.923530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.429 qpair failed and we were unable to recover it. 00:27:46.429 [2024-11-19 11:38:59.923787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.429 [2024-11-19 11:38:59.923820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.429 qpair failed and we were unable to recover it. 
00:27:46.429 [2024-11-19 11:38:59.923969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.429 [2024-11-19 11:38:59.924005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.429 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.924320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.924353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.924511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.924545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.924703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.924736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.925021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.925056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 
00:27:46.430 [2024-11-19 11:38:59.925179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.925213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.925471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.925504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.925783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.925815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.926032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.926067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.926317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.926348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 
00:27:46.430 [2024-11-19 11:38:59.926656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.926688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.926959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.926993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.927228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.927262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.927464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.927496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.927770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.927805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 
00:27:46.430 [2024-11-19 11:38:59.928003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.928036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.928246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.928278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.928427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.928460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.928707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.928739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.928936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.928981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 
00:27:46.430 [2024-11-19 11:38:59.929138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.929171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.929379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.929410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.929677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.929710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.929973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.930009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.930208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.930240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 
00:27:46.430 [2024-11-19 11:38:59.930497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.930528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.930782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.930815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.931002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.931043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.931255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.931288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.931432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.931465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 
00:27:46.430 [2024-11-19 11:38:59.931686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.931719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.931853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.931885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.932165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.932200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.932402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.932434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.932656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.932689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 
00:27:46.430 [2024-11-19 11:38:59.932973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.933007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.933211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.933244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.933465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.933497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.430 [2024-11-19 11:38:59.933774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.430 [2024-11-19 11:38:59.933807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.430 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.934032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.934066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 
00:27:46.431 [2024-11-19 11:38:59.934347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.934380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.934607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.934641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.934896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.934928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.935139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.935171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.935442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.935474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 
00:27:46.431 [2024-11-19 11:38:59.935729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.935761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.935965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.935998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.936229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.936261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.936467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.936499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.936756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.936788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 
00:27:46.431 [2024-11-19 11:38:59.937085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.937120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.937389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.937420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.937571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.937603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.937827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.937859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.938141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.938177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 
00:27:46.431 [2024-11-19 11:38:59.938380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.938412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.938580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.938612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.938840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.938872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.939156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.939191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.939397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.939430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 
00:27:46.431 [2024-11-19 11:38:59.939696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.939728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.939908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.939940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.940231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.940264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.940398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.940430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.940733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.940766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 
00:27:46.431 [2024-11-19 11:38:59.941039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.941073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.941308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.941340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.941500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.941538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.941721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.941753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.941971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.942004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 
00:27:46.431 [2024-11-19 11:38:59.942228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.942261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.942485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.942517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.942795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.942827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.943096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.943130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.943343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.943374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 
00:27:46.431 [2024-11-19 11:38:59.943511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.943543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.431 [2024-11-19 11:38:59.943768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.431 [2024-11-19 11:38:59.943801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.431 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.944055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.944089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.944350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.944382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.944533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.944565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 
00:27:46.432 [2024-11-19 11:38:59.944870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.944903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.945093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.945127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.945385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.945417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.945747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.945779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.946036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.946071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 
00:27:46.432 [2024-11-19 11:38:59.946261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.946293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.946501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.946533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.946802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.946834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.947107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.947141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.947366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.947399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 
00:27:46.432 [2024-11-19 11:38:59.947567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.947599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.947850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.947883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.948165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.948198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.948475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.948507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.948709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.948741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 
00:27:46.432 [2024-11-19 11:38:59.949003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.949037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.949250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.949282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.949487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.949519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.949738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.949770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.950024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.950059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 
00:27:46.432 [2024-11-19 11:38:59.950352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.950384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.950694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.950725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.950920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.950963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.951283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.951315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.951597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.951629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 
00:27:46.432 [2024-11-19 11:38:59.951960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.951994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.952205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.952238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.952417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.952455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.952737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.952769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.953033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.953067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 
00:27:46.432 [2024-11-19 11:38:59.953376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.432 [2024-11-19 11:38:59.953409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.432 qpair failed and we were unable to recover it. 00:27:46.432 [2024-11-19 11:38:59.953698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.953730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.954010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.954044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.954250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.954282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.954560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.954592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 
00:27:46.433 [2024-11-19 11:38:59.954794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.954825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.955095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.955129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.955333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.955366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.955619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.955651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.955929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.955971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 
00:27:46.433 [2024-11-19 11:38:59.956173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.956205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.956357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.956389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.956699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.956733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.957016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.957049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.957272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.957304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 
00:27:46.433 [2024-11-19 11:38:59.957558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.957591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.957850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.957882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.958084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.958118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.958304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.958336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.958557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.958589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 
00:27:46.433 [2024-11-19 11:38:59.958865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.958898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.959050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.959083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.959278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.959310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.959462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.959494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.959791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.959824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 
00:27:46.433 [2024-11-19 11:38:59.960106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.960139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.960325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.960357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.960635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.960666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.960944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.960986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.961205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.961237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 
00:27:46.433 [2024-11-19 11:38:59.961467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.961499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.961822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.961854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.962051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.962085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.962280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.962311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.962511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.962543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 
00:27:46.433 [2024-11-19 11:38:59.962815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.962847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.963133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.963167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.963445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.963483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.963777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.433 [2024-11-19 11:38:59.963809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.433 qpair failed and we were unable to recover it. 00:27:46.433 [2024-11-19 11:38:59.963998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.964030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 
00:27:46.434 [2024-11-19 11:38:59.964259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.964292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.964596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.964629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.964841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.964875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.965057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.965091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.965298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.965331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 
00:27:46.434 [2024-11-19 11:38:59.965603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.965635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.965920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.965972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.966209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.966241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.966446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.966478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.966700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.966733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 
00:27:46.434 [2024-11-19 11:38:59.966988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.967022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.967307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.967339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.967643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.967675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.967939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.967980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.968196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.968228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 
00:27:46.434 [2024-11-19 11:38:59.968432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.968464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.968732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.968763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.969021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.969055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.969278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.969311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.969579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.969612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 
00:27:46.434 [2024-11-19 11:38:59.969866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.969898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.970139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.970173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.970370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.970402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.970541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.970573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.970727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.970760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 
00:27:46.434 [2024-11-19 11:38:59.970943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.970987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.971305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.971338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.971641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.971673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.971870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.971902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.972137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.972172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 
00:27:46.434 [2024-11-19 11:38:59.972445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.972477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.972765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.972797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.973021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.973055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.973271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.973303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.973418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.973450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 
00:27:46.434 [2024-11-19 11:38:59.973585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.973617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.973874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.434 [2024-11-19 11:38:59.973906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.434 qpair failed and we were unable to recover it. 00:27:46.434 [2024-11-19 11:38:59.974209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.974254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.974513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.974546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.974838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.974869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 
00:27:46.435 [2024-11-19 11:38:59.975147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.975181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.975389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.975422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.975603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.975635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.975919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.975959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.976104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.976137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 
00:27:46.435 [2024-11-19 11:38:59.976414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.976446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.976702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.976734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.976984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.977018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.977202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.977234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.977515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.977547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 
00:27:46.435 [2024-11-19 11:38:59.977768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.977801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.977926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.977968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.978243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.978276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.978555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.978588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.978877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.978909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 
00:27:46.435 [2024-11-19 11:38:59.979213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.979248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.979451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.979483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.979768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.979800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.979997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.980031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.980329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.980361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 
00:27:46.435 [2024-11-19 11:38:59.980651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.980683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.980873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.980905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.981197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.981231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.981509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.981541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.981833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.981866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 
00:27:46.435 [2024-11-19 11:38:59.982137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.982171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.982447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.982479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.982773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.982806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.983081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.983115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.983321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.983352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 
00:27:46.435 [2024-11-19 11:38:59.983607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.983640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.983891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.983922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.984239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.984272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.984556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.984588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 00:27:46.435 [2024-11-19 11:38:59.984788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.435 [2024-11-19 11:38:59.984820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.435 qpair failed and we were unable to recover it. 
00:27:46.435 [2024-11-19 11:38:59.985001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.985035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.985309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.985341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.985534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.985572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.985820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.985852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.986038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.986073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 
00:27:46.436 [2024-11-19 11:38:59.986287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.986320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.986521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.986554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.986830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.986864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.987118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.987152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.987454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.987488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 
00:27:46.436 [2024-11-19 11:38:59.987770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.987802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.988062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.988096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.988316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.988348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.988605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.988637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.988893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.988925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 
00:27:46.436 [2024-11-19 11:38:59.989146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.989179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.989379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.989411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.989683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.989716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.989989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.990024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.990164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.990197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 
00:27:46.436 [2024-11-19 11:38:59.990452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.990483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.990787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.990820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.991110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.991144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.991420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.991454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.991742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.991774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 
00:27:46.436 [2024-11-19 11:38:59.992052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.992086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.992293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.992324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.992522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.992553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.992826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.992857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.992986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.993025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 
00:27:46.436 [2024-11-19 11:38:59.993283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.993314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.993595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.993628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.993881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.993913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.994044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae8af0 (9): Bad file descriptor 00:27:46.436 [2024-11-19 11:38:59.994522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.994600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 00:27:46.436 [2024-11-19 11:38:59.994887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.436 [2024-11-19 11:38:59.994924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.436 qpair failed and we were unable to recover it. 
00:27:46.438 [2024-11-19 11:39:00.016535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.438 [2024-11-19 11:39:00.016624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.438 qpair failed and we were unable to recover it.
00:27:46.439 [2024-11-19 11:39:00.026887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.439 [2024-11-19 11:39:00.026919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.439 qpair failed and we were unable to recover it. 00:27:46.439 [2024-11-19 11:39:00.027231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.439 [2024-11-19 11:39:00.027265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.439 qpair failed and we were unable to recover it. 00:27:46.439 [2024-11-19 11:39:00.027415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.439 [2024-11-19 11:39:00.027448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.439 qpair failed and we were unable to recover it. 00:27:46.439 [2024-11-19 11:39:00.027749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.027782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.027992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.028026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 
00:27:46.440 [2024-11-19 11:39:00.028244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.028277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.028487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.028520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.028661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.028695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.028981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.029016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.029222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.029255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 
00:27:46.440 [2024-11-19 11:39:00.029405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.029438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.029629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.029664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.029986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.030020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.030162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.030194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.030341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.030374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 
00:27:46.440 [2024-11-19 11:39:00.030573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.030605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.030863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.030896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.031105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.031139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.031324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.031363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.031627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.031659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 
00:27:46.440 [2024-11-19 11:39:00.031857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.031891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.032186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.032220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.032449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.032483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.032672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.032706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.032971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.033007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 
00:27:46.440 [2024-11-19 11:39:00.033263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.033295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.033550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.033583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.033789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.033822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.034036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.034070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.034300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.034333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 
00:27:46.440 [2024-11-19 11:39:00.034489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.034523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.034661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.034695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.034930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.034983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.035259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.035293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.035495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.035528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 
00:27:46.440 [2024-11-19 11:39:00.035679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.035711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.035990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.036024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.036170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.036202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.036431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.036465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.036668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.036702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 
00:27:46.440 [2024-11-19 11:39:00.036899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.036932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.440 [2024-11-19 11:39:00.037198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.440 [2024-11-19 11:39:00.037233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.440 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.037525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.037559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.037743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.037776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.038051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.038086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 
00:27:46.441 [2024-11-19 11:39:00.038301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.038342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.038541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.038575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.038848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.038881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.039117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.039151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.039373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.039406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 
00:27:46.441 [2024-11-19 11:39:00.039561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.039595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.039803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.039836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.040032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.040065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.040249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.040283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.040499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.040534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 
00:27:46.441 [2024-11-19 11:39:00.040732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.040774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.041064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.041101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.041358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.041391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.041636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.041669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.041876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.041985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 
00:27:46.441 [2024-11-19 11:39:00.042194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.042233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.042481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.042515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.042769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.042802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.043000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.043036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.043333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.043368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 
00:27:46.441 [2024-11-19 11:39:00.043636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.043669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.043968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.044005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.044151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.044194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.044473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.044538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.044731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.044820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 
00:27:46.441 [2024-11-19 11:39:00.045184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.045228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.045471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.045534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.045839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.045897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.046173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.046227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.046484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.046575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 
00:27:46.441 [2024-11-19 11:39:00.046760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.046812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.047189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.047271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.047619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.047735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.441 qpair failed and we were unable to recover it. 00:27:46.441 [2024-11-19 11:39:00.048179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.441 [2024-11-19 11:39:00.048232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.048493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.048528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 
00:27:46.442 [2024-11-19 11:39:00.048787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.048820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.049077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.049111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.049258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.049290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.049548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.049581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.049764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.049798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 
00:27:46.442 [2024-11-19 11:39:00.050003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.050039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.050256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.050289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.050426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.050459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.050599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.050632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.050830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.050863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 
00:27:46.442 [2024-11-19 11:39:00.051001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.051035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.051181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.051215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.051347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.051381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.051660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.051693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.051986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.052019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 
00:27:46.442 [2024-11-19 11:39:00.052205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.052238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.052444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.052477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.052767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.052799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.053024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.053058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.053256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.053288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 
00:27:46.442 [2024-11-19 11:39:00.053515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.053549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.053702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.053735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.053989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.054025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.054250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.054284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.054539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.054572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 
00:27:46.442 [2024-11-19 11:39:00.054880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.054914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.055207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.055241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.055493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.055526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.055785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.055818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.056041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.056076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 
00:27:46.442 [2024-11-19 11:39:00.056310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.056343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.056552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.056586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.056867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.056900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.057132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.057174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.442 qpair failed and we were unable to recover it. 00:27:46.442 [2024-11-19 11:39:00.057445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.442 [2024-11-19 11:39:00.057480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 
00:27:46.443 [2024-11-19 11:39:00.057783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.057817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.058016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.058052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.058304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.058339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.058536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.058571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.058824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.058859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 
00:27:46.443 [2024-11-19 11:39:00.059057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.059093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.059364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.059398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.059615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.059648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.059926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.059967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.060247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.060281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 
00:27:46.443 [2024-11-19 11:39:00.060581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.060615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.060804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.060846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.061121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.061156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.061446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.061478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.061813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.061847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 
00:27:46.443 [2024-11-19 11:39:00.062068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.062102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.062261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.062296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.062571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.062604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.062860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.062893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.063210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.063244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 
00:27:46.443 [2024-11-19 11:39:00.063502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.063537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.063760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.063793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.064018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.064053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.064303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.064337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.064540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.064574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 
00:27:46.443 [2024-11-19 11:39:00.064829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.064863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.065044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.065079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.065205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.065239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.065491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.065525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.065725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.065758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 
00:27:46.443 [2024-11-19 11:39:00.065957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.065991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.066261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.066293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.066565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.066598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.066800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.066832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.066967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.067001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 
00:27:46.443 [2024-11-19 11:39:00.067259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.067291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.067419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.443 [2024-11-19 11:39:00.067452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.443 qpair failed and we were unable to recover it. 00:27:46.443 [2024-11-19 11:39:00.067587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.067620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.067834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.067869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.068003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.068038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 
00:27:46.444 [2024-11-19 11:39:00.068292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.068326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.068531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.068564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.068768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.068801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.068989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.069023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.069207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.069241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 
00:27:46.444 [2024-11-19 11:39:00.069445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.069478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.069622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.069655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.069877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.069910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.070103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.070138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.070318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.070350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 
00:27:46.444 [2024-11-19 11:39:00.070533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.070568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.070742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.070774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.070913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.070956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.071234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.071269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 00:27:46.444 [2024-11-19 11:39:00.071522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.071556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 
00:27:46.444 [2024-11-19 11:39:00.071739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.444 [2024-11-19 11:39:00.071773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.444 qpair failed and we were unable to recover it. 
[... identical errno = 111 triple repeated for tqpair=0x7f5068000b90 through 11:39:00.076588; only timestamps change ...]
00:27:46.445 [2024-11-19 11:39:00.076723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.445 [2024-11-19 11:39:00.076774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.445 qpair failed and we were unable to recover it. 
[... identical errno = 111 triple repeated for tqpair=0x7f5064000b90 through 11:39:00.092697; only timestamps change ...]
00:27:46.447 [2024-11-19 11:39:00.092889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.092976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 
00:27:46.447 [2024-11-19 11:39:00.093265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.093344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 
[... identical errno = 111 triple repeated for tqpair=0xadaba0 through 11:39:00.094788, then once more for tqpair=0x7f5064000b90 at 11:39:00.095347 ...]
00:27:46.447 [2024-11-19 11:39:00.095926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.096077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 
[... identical errno = 111 triple repeated for tqpair=0x7f5070000b90 through 11:39:00.098664; only timestamps change ...]
00:27:46.447 [2024-11-19 11:39:00.098941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.098985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.447 [2024-11-19 11:39:00.099261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.099294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.447 [2024-11-19 11:39:00.099505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.099538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.447 [2024-11-19 11:39:00.099664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.099697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.447 [2024-11-19 11:39:00.099906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.099937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 
00:27:46.447 [2024-11-19 11:39:00.100204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.100237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.447 [2024-11-19 11:39:00.100378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.100410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.447 [2024-11-19 11:39:00.100704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.100736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.447 [2024-11-19 11:39:00.100998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.101032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.447 [2024-11-19 11:39:00.101330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.101362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 
00:27:46.447 [2024-11-19 11:39:00.101650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.101682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.447 [2024-11-19 11:39:00.101870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.101902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.447 [2024-11-19 11:39:00.102162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.102197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.447 [2024-11-19 11:39:00.102341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.102373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.447 [2024-11-19 11:39:00.102585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.102617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 
00:27:46.447 [2024-11-19 11:39:00.102843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.447 [2024-11-19 11:39:00.102875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.447 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.103145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.103180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.103325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.103357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.103501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.103533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.103716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.103748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 
00:27:46.448 [2024-11-19 11:39:00.103882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.103914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.104105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.104138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.104315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.104365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.104487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.104518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.104716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.104748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 
00:27:46.448 [2024-11-19 11:39:00.104927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.104968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.105180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.105213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.105405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.105437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.105553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.105591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.105772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.105803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 
00:27:46.448 [2024-11-19 11:39:00.106069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.106103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.106281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.106314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.106591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.106623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.106812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.106845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.107154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.107188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 
00:27:46.448 [2024-11-19 11:39:00.107377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.107408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.107657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.107690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.107840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.107872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.108142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.108175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.108420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.108452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 
00:27:46.448 [2024-11-19 11:39:00.108642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.108674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.108875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.108907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.109133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.109167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.109362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.109394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.109664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.109697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 
00:27:46.448 [2024-11-19 11:39:00.109891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.109923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.110182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.110215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.110423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.110455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.110642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.110674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.110939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.110984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 
00:27:46.448 [2024-11-19 11:39:00.111105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.111136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.111417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.111449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.111720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.111751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.448 [2024-11-19 11:39:00.112045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.448 [2024-11-19 11:39:00.112079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.448 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.112339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.112371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 
00:27:46.449 [2024-11-19 11:39:00.112602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.112651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.112852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.112886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.113094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.113128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.113410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.113443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.113714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.113747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 
00:27:46.449 [2024-11-19 11:39:00.114035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.114069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.114265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.114298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.114553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.114586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.114869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.114901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.115122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.115156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 
00:27:46.449 [2024-11-19 11:39:00.115304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.115335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.115530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.115562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.115778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.115810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.116009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.116043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.116320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.116353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 
00:27:46.449 [2024-11-19 11:39:00.116628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.116660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.116884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.116917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.117138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.117171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.117372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.117404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.117586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.117617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 
00:27:46.449 [2024-11-19 11:39:00.117886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.117918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.118068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.118106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.118356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.118388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.118580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.118613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.118856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.118888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 
00:27:46.449 [2024-11-19 11:39:00.119150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.119184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.119397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.119430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.119757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.119794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.120072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.120106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.120379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.120412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 
00:27:46.449 [2024-11-19 11:39:00.120707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.120738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.121003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.121037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.121302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.121334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.121527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.121559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.121824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.121856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 
00:27:46.449 [2024-11-19 11:39:00.122146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.122180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.122456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.449 [2024-11-19 11:39:00.122488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.449 qpair failed and we were unable to recover it. 00:27:46.449 [2024-11-19 11:39:00.122772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.122804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.122972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.123007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.123199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.123231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-11-19 11:39:00.123443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.123475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.123723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.123756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.124022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.124057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.124347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.124380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.124580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.124630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-11-19 11:39:00.124923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.124989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.125178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.125210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.125478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.125510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.125707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.125739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.125936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.125978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-11-19 11:39:00.126228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.126260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.126481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.126513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.126785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.126817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.127060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.127093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.127362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.127401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-11-19 11:39:00.127571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.127603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.127806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.127837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.127970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.128003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.128270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.128302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.128549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.128581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-11-19 11:39:00.128850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.128883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.129158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.129192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.129440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.129472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.129787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.129819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.130065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.130099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-11-19 11:39:00.130240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.130272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.130527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.130559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.130825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.130857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.131010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.131044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.131241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.131273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-11-19 11:39:00.131479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.131512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.131809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.131841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.132147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.132181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.132445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.132478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.450 [2024-11-19 11:39:00.132630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.132662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 
00:27:46.450 [2024-11-19 11:39:00.132911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.450 [2024-11-19 11:39:00.132944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.450 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.133234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.133266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.133468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.133501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.133768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.133800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.133995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.134028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-11-19 11:39:00.134215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.134248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.134555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.134588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.134779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.134814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.135033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.135068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.135337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.135369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-11-19 11:39:00.135653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.135685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.135966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.136000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.136154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.136186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.136410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.136443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.136691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.136724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-11-19 11:39:00.136989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.137025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.137225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.137258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.137511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.137543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.137752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.137784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.137919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.137961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-11-19 11:39:00.138177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.138216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.138429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.138462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.138757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.138790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.139080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.139114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.139317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.139349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-11-19 11:39:00.139598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.139630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.139878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.139910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.140125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.140159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.140338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.140371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.140592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.140624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 
00:27:46.451 [2024-11-19 11:39:00.140869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.140901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.141153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.141187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.141377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.141410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.141666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.141698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.451 qpair failed and we were unable to recover it. 00:27:46.451 [2024-11-19 11:39:00.142002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.451 [2024-11-19 11:39:00.142037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 
00:27:46.452 [2024-11-19 11:39:00.142177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.142209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-11-19 11:39:00.142416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.142449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-11-19 11:39:00.142656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.142689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-11-19 11:39:00.142974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.143007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-11-19 11:39:00.143156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.143189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 
00:27:46.452 [2024-11-19 11:39:00.143404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.143436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-11-19 11:39:00.143564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.143597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-11-19 11:39:00.143850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.143883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-11-19 11:39:00.144153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.144188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-11-19 11:39:00.144380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.144413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 
00:27:46.452 [2024-11-19 11:39:00.144697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.144730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-11-19 11:39:00.144984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.145019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-11-19 11:39:00.145215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.145254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-11-19 11:39:00.145519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.145552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 00:27:46.452 [2024-11-19 11:39:00.145681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.452 [2024-11-19 11:39:00.145714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.452 qpair failed and we were unable to recover it. 
00:27:46.452 [2024-11-19 11:39:00.145933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.145977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.146154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.146187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.146458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.146490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.146799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.146832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.147100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.147135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.147425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.147457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.147727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.147760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.147883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.147916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.148205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.148239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.148418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.148451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.148657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.148689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.148894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.148928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.149148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.149182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.149366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.149399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.149679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.149712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.149930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.149973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.150224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.150257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.150403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.150436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.150656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.150688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.150968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.151002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.151273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.151306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.151525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.151558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.151692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.452 [2024-11-19 11:39:00.151724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.452 qpair failed and we were unable to recover it.
00:27:46.452 [2024-11-19 11:39:00.151941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.151986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.152113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.152147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.152363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.152396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.152695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.152728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.152919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.152974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.153181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.153215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.153401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.153434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.153634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.153667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.153938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.153985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.154108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.154141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.154306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.154338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.154601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.154634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.154860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.154893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.155103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.155137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.155401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.155434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.155699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.155738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.155972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.156007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.156197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.156230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.156488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.156522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.156784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.156816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.157023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.157057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.157349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.157383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.157604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.157637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.157842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.157875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.158147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.158182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.158380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.158413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.158624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.158657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.158865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.158898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.159110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.159143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.159269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.159303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.159494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.159528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.159643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.159676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.159793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.159825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.160049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.160083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.160238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.160272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.160464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.160497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.160615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.160648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.160865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.160899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.453 qpair failed and we were unable to recover it.
00:27:46.453 [2024-11-19 11:39:00.161191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.453 [2024-11-19 11:39:00.161225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.161496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.161529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.161684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.161717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.161846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.161879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.162105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.162145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.162343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.162376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.162563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.162597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.162782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.162815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.162944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.162989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.163185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.163218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.163349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.163382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.163507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.163540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.163741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.163774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.164049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.164084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.164334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.164367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.164518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.164550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.164681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.164714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.164832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.164866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.165138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.165217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.165372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.165410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.165619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.165653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.165784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.165817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.165973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.166007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.166195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.166228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.166429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.166462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.166738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.166771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.166881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.166914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.167121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.167159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.167347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.167381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.167583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.167616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.167855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.167887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.168034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.168079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.168224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.168257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.168465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.454 [2024-11-19 11:39:00.168498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.454 qpair failed and we were unable to recover it.
00:27:46.454 [2024-11-19 11:39:00.168680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.454 [2024-11-19 11:39:00.168713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.454 qpair failed and we were unable to recover it. 00:27:46.454 [2024-11-19 11:39:00.168992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.454 [2024-11-19 11:39:00.169027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.454 qpair failed and we were unable to recover it. 00:27:46.454 [2024-11-19 11:39:00.169166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.454 [2024-11-19 11:39:00.169199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.454 qpair failed and we were unable to recover it. 00:27:46.454 [2024-11-19 11:39:00.169392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.454 [2024-11-19 11:39:00.169424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.454 qpair failed and we were unable to recover it. 00:27:46.454 [2024-11-19 11:39:00.169619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.454 [2024-11-19 11:39:00.169652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.454 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-11-19 11:39:00.169874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.169908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.170174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.170208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.170334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.170368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.170657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.170690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.170828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.170861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-11-19 11:39:00.171051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.171085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.171364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.171402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.171603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.171636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.171788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.171820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.172100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.172136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-11-19 11:39:00.172324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.172356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.172501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.172535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.172810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.172843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.172980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.173014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.173135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.173168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-11-19 11:39:00.173359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.173392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.173578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.173610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.173794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.173826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.174080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.174115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.174392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.174434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-11-19 11:39:00.174667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.174700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.174959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.174994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.175121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.175154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.175286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.175319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.175520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.175553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-11-19 11:39:00.175745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.175776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.175980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.176014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.176163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.176194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.176395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.176427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.176571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.176603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 
00:27:46.455 [2024-11-19 11:39:00.176797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.176830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.177088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.177121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.177262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.177295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.177498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.455 [2024-11-19 11:39:00.177530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.455 qpair failed and we were unable to recover it. 00:27:46.455 [2024-11-19 11:39:00.177730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-11-19 11:39:00.177762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 
00:27:46.456 [2024-11-19 11:39:00.177984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-11-19 11:39:00.178018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-11-19 11:39:00.178225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-11-19 11:39:00.178257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-11-19 11:39:00.178394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-11-19 11:39:00.178426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-11-19 11:39:00.178656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-11-19 11:39:00.178689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 00:27:46.456 [2024-11-19 11:39:00.178891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.456 [2024-11-19 11:39:00.178924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.456 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-11-19 11:39:00.179147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.179182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.179433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.179467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.179725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.179757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.179970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.180005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.180272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.180304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-11-19 11:39:00.180585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.180617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.180847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.180881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.181021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.181056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.181259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.181290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.181514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.181548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-11-19 11:39:00.181798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.181830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.181992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.182027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.182220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.182252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.182514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.182548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.182730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.182762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 
00:27:46.735 [2024-11-19 11:39:00.182969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.183003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.183222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.183254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.735 qpair failed and we were unable to recover it. 00:27:46.735 [2024-11-19 11:39:00.183404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.735 [2024-11-19 11:39:00.183437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.183577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.183609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.183821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.183861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 
00:27:46.736 [2024-11-19 11:39:00.184081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.184114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.184239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.184272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.184535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.184567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.184843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.184875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.185058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.185092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 
00:27:46.736 [2024-11-19 11:39:00.185363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.185396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.185667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.185700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.185993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.186028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.186322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.186354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.186644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.186677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 
00:27:46.736 [2024-11-19 11:39:00.186881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.186913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.187135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.187170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.187397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.187429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.187658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.187691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.187824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.187855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 
00:27:46.736 [2024-11-19 11:39:00.188059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.188095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.188374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.188407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.188604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.188637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.188890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.188922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 00:27:46.736 [2024-11-19 11:39:00.189215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.736 [2024-11-19 11:39:00.189249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.736 qpair failed and we were unable to recover it. 
00:27:46.736 [2024-11-19 11:39:00.189441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.736 [2024-11-19 11:39:00.189474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.736 qpair failed and we were unable to recover it.
00:27:46.736 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats 114 more times between 11:39:00.189797 and 11:39:00.219137 ...]
00:27:46.739 [2024-11-19 11:39:00.219366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.219399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-11-19 11:39:00.219585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.219618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-11-19 11:39:00.219853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.219886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-11-19 11:39:00.220092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.220126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-11-19 11:39:00.220335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.220369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 
00:27:46.739 [2024-11-19 11:39:00.220599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.220631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-11-19 11:39:00.220836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.220869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-11-19 11:39:00.221109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.221144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-11-19 11:39:00.221302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.221335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-11-19 11:39:00.221543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.221576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 
00:27:46.739 [2024-11-19 11:39:00.221796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.221829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-11-19 11:39:00.222036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.222077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-11-19 11:39:00.222234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-11-19 11:39:00.222267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-11-19 11:39:00.222461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.222495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.222709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.222744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 
00:27:46.740 [2024-11-19 11:39:00.223026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.223060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.223282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.223316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.223591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.223624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.223830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.223863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.224139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.224173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 
00:27:46.740 [2024-11-19 11:39:00.224321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.224355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.224578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.224610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.224864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.224896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.225039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.225074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.225280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.225312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 
00:27:46.740 [2024-11-19 11:39:00.225441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.225475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.225696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.225730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.225985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.226020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.226325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.226359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.226569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.226602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 
00:27:46.740 [2024-11-19 11:39:00.226805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.226838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.227039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.227074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.227230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.227263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.227473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.227506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.227713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.227745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 
00:27:46.740 [2024-11-19 11:39:00.228031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.228066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.228275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.228307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.228442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.228475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.228689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.228723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.228850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.228884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 
00:27:46.740 [2024-11-19 11:39:00.229161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.229195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.229406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.229439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.229647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.229680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.229907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-11-19 11:39:00.229940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-11-19 11:39:00.230169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.230204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 
00:27:46.741 [2024-11-19 11:39:00.230328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.230360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.230508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.230541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.230679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.230711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.230930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.230979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.231124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.231157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 
00:27:46.741 [2024-11-19 11:39:00.231274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.231307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.231549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.231593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.231718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.231751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.231855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.231888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.232060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.232095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 
00:27:46.741 [2024-11-19 11:39:00.232242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.232276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.232468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.232502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.232687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.232719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.232862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.232895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.233099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.233133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 
00:27:46.741 [2024-11-19 11:39:00.233319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.233354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.233493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.233525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.233784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.233818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.233972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.234008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.234280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.234315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 
00:27:46.741 [2024-11-19 11:39:00.234578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.234611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.234744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.234777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.234895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.234929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.235080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.235115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.235320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.235353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 
00:27:46.741 [2024-11-19 11:39:00.235611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.235644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.235786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.235819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.235935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.235982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.236109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.236142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 00:27:46.741 [2024-11-19 11:39:00.236271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.236304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 
00:27:46.741 [2024-11-19 11:39:00.236541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.741 [2024-11-19 11:39:00.236575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.741 qpair failed and we were unable to recover it. 
[... ~114 further identical connect() failures omitted: errno = 111 (ECONNREFUSED) from posix_sock_create, followed by the same nvme_tcp_qpair_connect_sock error for tqpair=0x7f5070000b90, addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it."; timestamps 2024-11-19 11:39:00.236–11:39:00.261 ...]
00:27:46.745 [2024-11-19 11:39:00.261477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.261510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.261809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.261842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.262187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.262222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.262438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.262470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.262750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.262783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 
00:27:46.745 [2024-11-19 11:39:00.263086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.263119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.263383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.263416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.263637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.263669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.263908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.263941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.264126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.264159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 
00:27:46.745 [2024-11-19 11:39:00.264309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.264342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.264549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.264581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.264834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.264865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.264985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.265020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.265132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.265164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 
00:27:46.745 [2024-11-19 11:39:00.265354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.265386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.265516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.265548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.265822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.265854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.266036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.266071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.266213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.266244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 
00:27:46.745 [2024-11-19 11:39:00.266460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.266493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.266694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.266733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.266877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.266909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.267131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.267165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.267442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.267476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 
00:27:46.745 [2024-11-19 11:39:00.267622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.267653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.267855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.267887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.268183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.268216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.268502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.268535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.268732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.268764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 
00:27:46.745 [2024-11-19 11:39:00.269020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.269055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.745 qpair failed and we were unable to recover it. 00:27:46.745 [2024-11-19 11:39:00.269310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.745 [2024-11-19 11:39:00.269341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-11-19 11:39:00.269612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-11-19 11:39:00.269645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-11-19 11:39:00.269894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-11-19 11:39:00.269926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-11-19 11:39:00.270131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-11-19 11:39:00.270164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 
00:27:46.746 [2024-11-19 11:39:00.270285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-11-19 11:39:00.270317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-11-19 11:39:00.270597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-11-19 11:39:00.270629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-11-19 11:39:00.270904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-11-19 11:39:00.270936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-11-19 11:39:00.271155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-11-19 11:39:00.271188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-11-19 11:39:00.271395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-11-19 11:39:00.271427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 
00:27:46.746 [2024-11-19 11:39:00.271667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.271698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-11-19 11:39:00.271887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.271919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-11-19 11:39:00.272141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.272175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2421424 Killed "${NVMF_APP[@]}" "$@"
00:27:46.746 [2024-11-19 11:39:00.272454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.272487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-11-19 11:39:00.272686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.272727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-11-19 11:39:00.272920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.272966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-11-19 11:39:00.273152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.273186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:46.746 [2024-11-19 11:39:00.273462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.273495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:46.746 [2024-11-19 11:39:00.273687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.273720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:46.746 [2024-11-19 11:39:00.273998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.274034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:46.746 [2024-11-19 11:39:00.274290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.274323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:46.746 [2024-11-19 11:39:00.274599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.274633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-11-19 11:39:00.274848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.274881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-11-19 11:39:00.275109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-11-19 11:39:00.275148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
[... the same three-line connect() errno = 111 / qpair recovery failure repeats for every retry from 11:39:00.275285 through 11:39:00.281125 ...]
00:27:46.747 [2024-11-19 11:39:00.281413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.281446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.281591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.281624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.281836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.281869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.282156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.282192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2422171
00:27:46.747 [2024-11-19 11:39:00.282477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.282513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2422171 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:46.747 [2024-11-19 11:39:00.282718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.282753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.282901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.282934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2422171 ']'
00:27:46.747 [2024-11-19 11:39:00.283125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.283161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.283318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.283353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.283645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:46.747 [2024-11-19 11:39:00.283678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.283938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:46.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... [2024-11-19 11:39:00.283998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.284202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.284235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:46.747 [2024-11-19 11:39:00.284510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:46.747 [2024-11-19 11:39:00.284546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.284833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.284866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.285165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.285199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.285355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.285387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.285588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.285621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.285814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.285849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.286094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.286128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.286326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.286358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.286644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.286688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.287005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.287042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.287265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.747 [2024-11-19 11:39:00.287299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.747 qpair failed and we were unable to recover it.
00:27:46.747 [2024-11-19 11:39:00.287525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.287559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.287757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.287792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.288011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.288046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.288253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.288288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.288439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.288473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.288616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.288651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.288936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.288986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.289116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.289157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.289379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.289412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.289620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.289653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.289944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.289992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.290149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.290183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.290329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.290363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.290600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.290632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.290837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.290872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.291133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.291168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.291322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.291356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.291564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.291597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.291879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.291912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.292122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.292158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.292310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.292343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.292587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.292620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.292903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.292938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.293161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.293194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.293323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.293356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.293633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.293665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.293942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.293989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.294205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.294238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.294498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.294532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.294807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.294842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.294999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.295036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.295252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.295285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.295431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.295465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.295686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.295718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.295836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.295871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.296110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.296144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.296370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.296404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.296619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.748 [2024-11-19 11:39:00.296651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.748 qpair failed and we were unable to recover it.
00:27:46.748 [2024-11-19 11:39:00.296799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.296834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.297028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.297063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.297292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.297325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.297523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.297555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.297831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.297866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.298134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.298168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.298485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.298518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.298800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.298833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.299059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.299097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.299382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.299420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.299674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.299708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.299984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.300020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.300325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.300359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.300504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.300537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.300816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.300850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.301071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.301106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.301218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.301251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.301531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.301565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.301834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.301868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.302092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.302127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.302284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.302318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.302600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.302634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.302820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.302854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.303004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.303039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.303249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.303283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.303442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.303476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.303699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.303733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.304022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.304058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.304216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.304250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.304408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.304440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.304632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.304666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.304957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.304993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.305144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.305178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
00:27:46.749 [2024-11-19 11:39:00.305336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-11-19 11:39:00.305369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-11-19 11:39:00.305582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-11-19 11:39:00.305617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-11-19 11:39:00.305895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-11-19 11:39:00.305928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-11-19 11:39:00.306214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-11-19 11:39:00.306249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-11-19 11:39:00.306396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-11-19 11:39:00.306429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-11-19 11:39:00.306708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-11-19 11:39:00.306742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-11-19 11:39:00.306936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-11-19 11:39:00.306983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-11-19 11:39:00.307192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-11-19 11:39:00.307227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-11-19 11:39:00.307357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-11-19 11:39:00.307392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-11-19 11:39:00.307515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-11-19 11:39:00.307550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-11-19 11:39:00.307754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.750 [2024-11-19 11:39:00.307787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.750 qpair failed and we were unable to recover it.
00:27:46.750 [2024-11-19 11:39:00.307994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.308030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.308212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.308245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.308443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.308477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.308722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.308755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.309043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.309080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 
00:27:46.750 [2024-11-19 11:39:00.309354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.309394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.309613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.309645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.309898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.309932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.310152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.310187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.310306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.310338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 
00:27:46.750 [2024-11-19 11:39:00.310568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.310601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.310890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.310923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.311073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.311107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.311319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.311351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.311551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.311584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 
00:27:46.750 [2024-11-19 11:39:00.311770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.311803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.311963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.311998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.312196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.312229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.312444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.312479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.312681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.312715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 
00:27:46.750 [2024-11-19 11:39:00.312855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.312889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.313033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.313067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.313211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.313244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.313389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.313422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.313654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.313688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 
00:27:46.750 [2024-11-19 11:39:00.313897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.313931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.314065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.750 [2024-11-19 11:39:00.314101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.750 qpair failed and we were unable to recover it. 00:27:46.750 [2024-11-19 11:39:00.314216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.314249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.314390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.314424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.314639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.314671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-11-19 11:39:00.314795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.314830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.315085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.315120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.315264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.315299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.315586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.315618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.315816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.315849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-11-19 11:39:00.315971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.316006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.316193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.316226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.316448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.316481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.316781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.316815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.317101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.317135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-11-19 11:39:00.317282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.317316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.317510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.317544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.317821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.317854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.318056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.318090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.318221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.318256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-11-19 11:39:00.318410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.318448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.318577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.318610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.318822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.318856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.318978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.319023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.319293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.319327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-11-19 11:39:00.319456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.319491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.319712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.319744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.319940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.319986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.320169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.320202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.320457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.320491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-11-19 11:39:00.320746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.320778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.320907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.320940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.321085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.321118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.321235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.321268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.321473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.321506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-11-19 11:39:00.321689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.321723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.321846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.321879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.322009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.322043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.322324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.322357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 00:27:46.751 [2024-11-19 11:39:00.322556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.751 [2024-11-19 11:39:00.322589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.751 qpair failed and we were unable to recover it. 
00:27:46.751 [2024-11-19 11:39:00.322776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.322809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.323016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.323052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.323253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.323285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.323477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.323510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.323792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.323825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 
00:27:46.752 [2024-11-19 11:39:00.324011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.324047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.324176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.324208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.324378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.324455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.324623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.324661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.324847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.324882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 
00:27:46.752 [2024-11-19 11:39:00.325083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.325120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.325311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.325345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.325506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.325539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.325750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.325784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.325990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.326025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 
00:27:46.752 [2024-11-19 11:39:00.326216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.326250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.326537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.326570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.326775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.326808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.326987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.327022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.327152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.327187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 
00:27:46.752 [2024-11-19 11:39:00.327412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.327446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.327650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.327684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.327833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.327867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.328083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.328118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.328234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.328267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 
00:27:46.752 [2024-11-19 11:39:00.328551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.328584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.328716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.328751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.328963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.328998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.329268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.329302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.329482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.329524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 
00:27:46.752 [2024-11-19 11:39:00.329800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.329833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.329973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.330009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.330137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.330171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.330289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.330322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.330516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.330556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 
00:27:46.752 [2024-11-19 11:39:00.330776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.330809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.330967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.331002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.331256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.331290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.752 qpair failed and we were unable to recover it. 00:27:46.752 [2024-11-19 11:39:00.331429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.752 [2024-11-19 11:39:00.331462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.331592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.331624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 
00:27:46.753 [2024-11-19 11:39:00.331819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.331852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.332004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.332038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.332241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.332275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.332397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.332430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.332646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.332679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 
00:27:46.753 [2024-11-19 11:39:00.332871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.332904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.333103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.333136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.333316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.333348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.333495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.333528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.333650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.333682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 
00:27:46.753 [2024-11-19 11:39:00.333873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.333906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.334143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.334177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.334308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.334342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.334549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.334582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.334723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.334757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 
00:27:46.753 [2024-11-19 11:39:00.334896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.334928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.335065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.335100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.335157] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:27:46.753 [2024-11-19 11:39:00.335221] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.753 [2024-11-19 11:39:00.335297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.335332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.335536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.335567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 
00:27:46.753 [2024-11-19 11:39:00.335841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.335878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.336069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.336111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.336387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.336421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.336617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.336650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.336857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.336890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 
00:27:46.753 [2024-11-19 11:39:00.337096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.337131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.337409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.337444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.337635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.337669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.337802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.337835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.338037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.338072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 
00:27:46.753 [2024-11-19 11:39:00.338285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.338320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.338542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.338576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.338777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.338811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.339030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.339066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.339205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.339239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 
00:27:46.753 [2024-11-19 11:39:00.339565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.339598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.753 [2024-11-19 11:39:00.339880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.753 [2024-11-19 11:39:00.339914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.753 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.340160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.340238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.340470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.340507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.340792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.340825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 
00:27:46.754 [2024-11-19 11:39:00.341125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.341161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.341366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.341399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.341652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.341685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.341878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.341913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.342058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.342092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 
00:27:46.754 [2024-11-19 11:39:00.342231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.342263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.342545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.342577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.342777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.342810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.343013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.343064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.343212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.343244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 
00:27:46.754 [2024-11-19 11:39:00.343550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.343583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.343700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.343732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.343876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.343909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.344120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.344154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.344345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.344378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 
00:27:46.754 [2024-11-19 11:39:00.344513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.344546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.344743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.344777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.345029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.345065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.345317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.345350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.345623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.345656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 
00:27:46.754 [2024-11-19 11:39:00.345880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.345915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.346051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.346085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.346287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.346320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.346453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.346487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.346614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.346646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 
00:27:46.754 [2024-11-19 11:39:00.346781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.346814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.347004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.347040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.347156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.347190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.347373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.347407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-11-19 11:39:00.347600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-11-19 11:39:00.347634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 
00:27:46.757 [2024-11-19 11:39:00.366456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.757 [2024-11-19 11:39:00.366534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.757 qpair failed and we were unable to recover it.
00:27:46.757 [2024-11-19 11:39:00.366687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.757 [2024-11-19 11:39:00.366724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.757 qpair failed and we were unable to recover it.
00:27:46.757 [2024-11-19 11:39:00.366853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.757 [2024-11-19 11:39:00.366887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.757 qpair failed and we were unable to recover it.
00:27:46.757 [2024-11-19 11:39:00.367083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.757 [2024-11-19 11:39:00.367119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.757 qpair failed and we were unable to recover it.
00:27:46.757 [2024-11-19 11:39:00.367263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.757 [2024-11-19 11:39:00.367296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.757 qpair failed and we were unable to recover it.
00:27:46.757 [2024-11-19 11:39:00.371653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-11-19 11:39:00.371685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-11-19 11:39:00.371870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-11-19 11:39:00.371901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-11-19 11:39:00.372144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-11-19 11:39:00.372185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-11-19 11:39:00.372369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.372402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.372617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.372650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 
00:27:46.758 [2024-11-19 11:39:00.372931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.372974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.373177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.373209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.373336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.373367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.373569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.373602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.373820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.373851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 
00:27:46.758 [2024-11-19 11:39:00.374027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.374061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.374281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.374314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.374443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.374476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.374666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.374698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.374828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.374859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 
00:27:46.758 [2024-11-19 11:39:00.374996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.375031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.375166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.375199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.375401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.375433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.375547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.375580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.375773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.375804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 
00:27:46.758 [2024-11-19 11:39:00.375987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.376021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.376200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.376232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.376524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.376556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.376667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.376699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.376893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.376925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 
00:27:46.758 [2024-11-19 11:39:00.377131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.377165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.377277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.377309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.377497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.377529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.377722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.377754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.377966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.378007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 
00:27:46.758 [2024-11-19 11:39:00.378135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.378166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.378364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.378397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.378550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.378582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.378777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.378810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.378943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.378987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 
00:27:46.758 [2024-11-19 11:39:00.379117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.379149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-11-19 11:39:00.379368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-11-19 11:39:00.379399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.759 qpair failed and we were unable to recover it. 00:27:46.759 [2024-11-19 11:39:00.379604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.759 [2024-11-19 11:39:00.379636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.759 qpair failed and we were unable to recover it. 00:27:46.759 [2024-11-19 11:39:00.379911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.759 [2024-11-19 11:39:00.379944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.759 qpair failed and we were unable to recover it. 00:27:46.759 [2024-11-19 11:39:00.380219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.759 [2024-11-19 11:39:00.380253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.759 qpair failed and we were unable to recover it. 
00:27:46.759 [2024-11-19 11:39:00.380987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.759 [2024-11-19 11:39:00.381060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:46.759 qpair failed and we were unable to recover it.
00:27:46.759 [2024-11-19 11:39:00.381290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.759 [2024-11-19 11:39:00.381338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.759 qpair failed and we were unable to recover it.
00:27:46.760 [2024-11-19 11:39:00.393122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.393157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.393289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.393332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.393509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.393541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.393726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.393759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.393938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.393983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 
00:27:46.760 [2024-11-19 11:39:00.394176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.394214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.394409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.394442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.394558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.394590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.394767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.394810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.394936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.394979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 
00:27:46.760 [2024-11-19 11:39:00.395175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.395209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.395328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.395362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.395557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.395591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.395727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.395759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.396002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.396036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 
00:27:46.760 [2024-11-19 11:39:00.396297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.396330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.396448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.760 [2024-11-19 11:39:00.396480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.760 qpair failed and we were unable to recover it. 00:27:46.760 [2024-11-19 11:39:00.396687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.396720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.396909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.396940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.397143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.397176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-11-19 11:39:00.397358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.397390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.397676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.397707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.397922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.397960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.398097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.398128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.398373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.398405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-11-19 11:39:00.398670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.398701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.398884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.398916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.399051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.399084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.399392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.399464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.399702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.399738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-11-19 11:39:00.399944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.399988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.400178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.400210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.400337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.400368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.400507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.400539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.400724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.400756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-11-19 11:39:00.401028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.401060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.401181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.401213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.401333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.401365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.401617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.401649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.401874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.401906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-11-19 11:39:00.402040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.402073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.402197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.402239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.402375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.402407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.402685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.402717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.402907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.402938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-11-19 11:39:00.403047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.403080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.403209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.403241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.403395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.403426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.403627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.403659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.403879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.403909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-11-19 11:39:00.404028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.404062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.404244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.404275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.404463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.404494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.404788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.404819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 00:27:46.761 [2024-11-19 11:39:00.404940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.761 [2024-11-19 11:39:00.404982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.761 qpair failed and we were unable to recover it. 
00:27:46.761 [2024-11-19 11:39:00.405174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.405206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.405461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.405493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.405678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.405710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.405820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.405852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.406009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.406042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 
00:27:46.762 [2024-11-19 11:39:00.406194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.406227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.406331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.406362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.406486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.406521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.406727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.406759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.406878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.406909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 
00:27:46.762 [2024-11-19 11:39:00.407089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.407122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.407307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.407339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.407469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.407500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.407725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.407797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.407923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.407970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 
00:27:46.762 [2024-11-19 11:39:00.408160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.408191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.408311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.408342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.408542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.408574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.408793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.408825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 00:27:46.762 [2024-11-19 11:39:00.408981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.762 [2024-11-19 11:39:00.409014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:46.762 qpair failed and we were unable to recover it. 
00:27:46.762 [2024-11-19 11:39:00.409202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.409235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.409358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.409390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.409587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.409619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.409812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.409844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.410035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.410069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.410275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.410308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.410555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.410587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.410794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.410827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.411021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.411054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.411175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.411207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.411393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.411426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.411545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.411577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.411688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.411720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.411833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.411865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.412067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.412100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.412216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.412249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.412380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.412413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.412513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.762 [2024-11-19 11:39:00.412546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.762 qpair failed and we were unable to recover it.
00:27:46.762 [2024-11-19 11:39:00.412673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.412706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.412888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.412920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.413040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.413074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.413205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.413238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.413425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.413457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.413571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.413603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.413712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.413746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.413863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.413896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.414094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.414129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.414232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.414264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.414464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.414495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.414686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.414719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.414856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.414889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.415094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.415128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.415341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.415374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.415583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.415621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.415795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.415827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.415967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.416000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.416118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.416151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.416334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.416366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.416497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.416530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.416661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.416694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.416807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.416838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.417033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.417067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.417259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.417293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.417402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.417434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.417557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.417589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.417814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.417849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.418029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.418063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.418202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.418235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.418368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.418400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.418555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.418589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.418704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.418736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-11-19 11:39:00.418861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.763 [2024-11-19 11:39:00.418892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.419070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.419105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.419292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.419324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.419586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.419618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.419795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.419827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.420021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.420054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.420256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.420289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.420419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.420452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.420570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.420603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.420734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.420767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.420967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.421001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.421195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.421227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.421350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.421381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.421492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.421525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.421708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.421740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.421916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.421964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.422077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.422115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.422223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.422265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.422404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.422436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.422555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.422587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.422713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.422745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.422921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.422970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.423091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.423129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.423313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.423344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.423384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:46.764 [2024-11-19 11:39:00.423456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.423488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.423669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.423700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.423807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.423838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.423972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.424006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.424116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.424156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.424351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.424383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.424574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.424605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.424722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.424754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.424883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.424914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.425047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.425096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.425262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.425297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.425408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.425450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.425561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.425593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.425744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.764 [2024-11-19 11:39:00.425777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.764 qpair failed and we were unable to recover it.
00:27:46.764 [2024-11-19 11:39:00.425890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.425922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.426177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.426211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.426457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.426491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.426665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.426696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.426928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.426973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.427183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.427216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.427366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.427399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.427573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.427606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.427853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.427885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.428036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.428070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.428184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.428216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.428419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.428452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.428624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.428656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.428783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.428815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.428940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.428986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.429109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.429142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.429434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.429466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.429665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.429696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.429873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.429906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.430107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.430140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.430271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.430303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.430436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.430469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.430650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.430682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.430790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.430822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.430968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.431002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.431129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.431163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.765 qpair failed and we were unable to recover it.
00:27:46.765 [2024-11-19 11:39:00.431281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-11-19 11:39:00.431313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 00:27:46.765 [2024-11-19 11:39:00.431431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-11-19 11:39:00.431463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 00:27:46.765 [2024-11-19 11:39:00.431587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-11-19 11:39:00.431619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 00:27:46.765 [2024-11-19 11:39:00.431751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-11-19 11:39:00.431783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 00:27:46.765 [2024-11-19 11:39:00.431970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-11-19 11:39:00.432004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 
00:27:46.765 [2024-11-19 11:39:00.432128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-11-19 11:39:00.432161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 00:27:46.765 [2024-11-19 11:39:00.432292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-11-19 11:39:00.432323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 00:27:46.765 [2024-11-19 11:39:00.432458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-11-19 11:39:00.432491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 00:27:46.765 [2024-11-19 11:39:00.432680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-11-19 11:39:00.432713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 00:27:46.765 [2024-11-19 11:39:00.432908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.765 [2024-11-19 11:39:00.432941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.765 qpair failed and we were unable to recover it. 
00:27:46.765 [2024-11-19 11:39:00.433452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.765 [2024-11-19 11:39:00.433495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:46.766 qpair failed and we were unable to recover it.
00:27:46.768 [2024-11-19 11:39:00.450038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.450075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.450210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.450242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.450355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.450386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.450502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.450533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.450666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.450698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 
00:27:46.768 [2024-11-19 11:39:00.450874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.450906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.451018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.451052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.451235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.451268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.451374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.451406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.451529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.451560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 
00:27:46.768 [2024-11-19 11:39:00.451671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.451703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.451806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.451838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.451971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.452005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.452137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.452176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.452296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.452328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 
00:27:46.768 [2024-11-19 11:39:00.452445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.452478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.452722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.452755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.452873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.452904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.453100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.453135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.453312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.453345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 
00:27:46.768 [2024-11-19 11:39:00.453482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.453513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.453618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.453651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.453760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.453792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.453909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.453941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.454062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.454100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 
00:27:46.768 [2024-11-19 11:39:00.454322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.768 [2024-11-19 11:39:00.454353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.768 qpair failed and we were unable to recover it. 00:27:46.768 [2024-11-19 11:39:00.454525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.454558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.454669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.454700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.454809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.454840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.454956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.454990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-11-19 11:39:00.455118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.455149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.455259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.455291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.455402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.455439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.455634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.455668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.455858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.455889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-11-19 11:39:00.456032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.456065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.456178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.456211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.456323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.456354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.456536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.456574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.456751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.456784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-11-19 11:39:00.456888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.456918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.457051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.457096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.457220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.457252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.457361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.457393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.457510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.457542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-11-19 11:39:00.457651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.457682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.457799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.457831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.458023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.458058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.458181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.458214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.458400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.458433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-11-19 11:39:00.458546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.458578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.458692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.458724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.458852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.458884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.459007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.459041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.459160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.459193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-11-19 11:39:00.459313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.459346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.459454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.459488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.459595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.459628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.459799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.459831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.459932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.459974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 
00:27:46.769 [2024-11-19 11:39:00.460149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.460181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.460287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.460320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.460438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.460470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.769 [2024-11-19 11:39:00.460686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.769 [2024-11-19 11:39:00.460719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.769 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.460831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.460869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-11-19 11:39:00.461067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.461100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.461228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.461261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.461378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.461411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.461585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.461616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.461793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.461826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-11-19 11:39:00.462000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.462034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.462240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.462273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.462379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.462410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.462521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.462554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.462668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.462699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-11-19 11:39:00.462917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.462961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.463089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.463121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.463363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.463396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.463522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.463559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.463678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.463710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-11-19 11:39:00.463844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.463876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.464060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.464095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.464223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.464264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.464369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.464401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.464525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.464558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-11-19 11:39:00.464671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.464702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.464875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.464908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.465120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.465154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.465325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.465357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.465462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.465494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.465584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:46.770 [2024-11-19 11:39:00.465594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.465612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.770 [2024-11-19 11:39:00.465621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.770 [2024-11-19 11:39:00.465626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 [2024-11-19 11:39:00.465632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.770 [2024-11-19 11:39:00.465641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.465753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.465785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.465900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.465931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.466046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.466077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-11-19 11:39:00.466211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.466242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.466414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.466446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.466622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.466654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.466840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.466871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 00:27:46.770 [2024-11-19 11:39:00.467040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.770 [2024-11-19 11:39:00.467074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.770 qpair failed and we were unable to recover it. 
00:27:46.770 [2024-11-19 11:39:00.467142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:46.771 [2024-11-19 11:39:00.467278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.467311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:46.771 [2024-11-19 11:39:00.467231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.467339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:46.771 [2024-11-19 11:39:00.467449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.467486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 [2024-11-19 11:39:00.467340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.467669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.467700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.467836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.467869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-11-19 11:39:00.467984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.468018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.468202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.468234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.468350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.468383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.468496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.468528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.468774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.468809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-11-19 11:39:00.468998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.469033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.469166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.469199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.469313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.469347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.469459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.469491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.469676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.469710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-11-19 11:39:00.469827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.469858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.470002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.470037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.470171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.470210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.470462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.470495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.470680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.470714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-11-19 11:39:00.471027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.471064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.471189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.471223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.471355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.471387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.471566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.471598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.471757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.471790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-11-19 11:39:00.471982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.472017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.472146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.472178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.472356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.472388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.472506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.472538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.472645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.472678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-11-19 11:39:00.472868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.472901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.473043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.473076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.473252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.473286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.473410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.473443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.473562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.473594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 
00:27:46.771 [2024-11-19 11:39:00.473717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.473750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.473873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.473906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.771 [2024-11-19 11:39:00.474039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.771 [2024-11-19 11:39:00.474074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.771 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.474187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.474220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.474339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.474371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 
00:27:46.772 [2024-11-19 11:39:00.474487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.474520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.474704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.474737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.474977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.475010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.475255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.475288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.475411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.475454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 
00:27:46.772 [2024-11-19 11:39:00.475635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.475668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.475789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.475821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.476071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.476104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.476232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.476265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.476391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.476423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 
00:27:46.772 [2024-11-19 11:39:00.476629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.476663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.476794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.476826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.477023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.477057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.477247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.477280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.477396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.477429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 
00:27:46.772 [2024-11-19 11:39:00.477666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.477700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.477881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.477914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.478102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.478135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.478319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.478352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.478486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.478521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 
00:27:46.772 [2024-11-19 11:39:00.478645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.478677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.478924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.478969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.479158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.479191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.479370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.479402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.479590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.479622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 
00:27:46.772 [2024-11-19 11:39:00.479817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.479848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.479972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.480005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.480129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.480161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.480375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.480407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 00:27:46.772 [2024-11-19 11:39:00.480530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.772 [2024-11-19 11:39:00.480561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:46.772 qpair failed and we were unable to recover it. 
00:27:46.772 [2024-11-19 11:39:00.480675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.772 [2024-11-19 11:39:00.480707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:46.772 qpair failed and we were unable to recover it.
00:27:47.048 [2024-11-19 11:39:00.498777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.048 [2024-11-19 11:39:00.498838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.048 qpair failed and we were unable to recover it.
00:27:47.049 [2024-11-19 11:39:00.502210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.502241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.502412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.502444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.502647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.502678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.502822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.502853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.503055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.503089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 
00:27:47.049 [2024-11-19 11:39:00.503209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.503243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.503367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.503400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.503519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.503552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.503682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.503714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.503905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.503937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 
00:27:47.049 [2024-11-19 11:39:00.504123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.504156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.504270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.504301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.504421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.504452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.504578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.504610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.504882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.504913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 
00:27:47.049 [2024-11-19 11:39:00.505036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.505069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.505184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.505214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.505346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.505376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.505556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.505587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.505702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.505732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 
00:27:47.049 [2024-11-19 11:39:00.505913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.505945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.506148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.506180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.506315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.506366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.506564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.506597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.506801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.506833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 
00:27:47.049 [2024-11-19 11:39:00.507035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.507070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.507187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.507219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.507326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.507357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.507473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.507504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.507734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.507767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 
00:27:47.049 [2024-11-19 11:39:00.508067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.508100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.049 qpair failed and we were unable to recover it. 00:27:47.049 [2024-11-19 11:39:00.508301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.049 [2024-11-19 11:39:00.508333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.508448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.508480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.508593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.508625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.508830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.508862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 
00:27:47.050 [2024-11-19 11:39:00.508984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.509026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.509160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.509192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.509338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.509371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.509553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.509585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.509712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.509745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 
00:27:47.050 [2024-11-19 11:39:00.509966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.510000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.510123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.510157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.510333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.510364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.510583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.510617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.510757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.510789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 
00:27:47.050 [2024-11-19 11:39:00.511076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.511111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.511285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.511318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.511503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.511535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.511651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.511683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.511878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.511910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 
00:27:47.050 [2024-11-19 11:39:00.512116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.512149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.512412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.512445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.512655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.512688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.512879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.512912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.513113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.513146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 
00:27:47.050 [2024-11-19 11:39:00.513277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.513309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.513448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.513480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.513682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.513714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.513925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.513967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.514114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.514146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 
00:27:47.050 [2024-11-19 11:39:00.514290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.514322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.514514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.514546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.514842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.514874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.515139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.515173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.515299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.515331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 
00:27:47.050 [2024-11-19 11:39:00.515550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.515583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.515856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.515889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.516013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.516047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.050 [2024-11-19 11:39:00.516186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.050 [2024-11-19 11:39:00.516219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.050 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.516357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.516390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 
00:27:47.051 [2024-11-19 11:39:00.516505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.516537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.516719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.516752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.516941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.516984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.517197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.517231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.517420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.517453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 
00:27:47.051 [2024-11-19 11:39:00.517743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.517783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.517998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.518032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.518247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.518279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.518530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.518563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.518745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.518777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 
00:27:47.051 [2024-11-19 11:39:00.519075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.519112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.519287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.519323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.519521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.519557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.519794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.519829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.519976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.520013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 
00:27:47.051 [2024-11-19 11:39:00.520226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.520263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.520393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.520426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.520670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.520702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.520958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.520994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.521147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.521180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 
00:27:47.051 [2024-11-19 11:39:00.521367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.521401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.521593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.521628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.521756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.521790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.521932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.521986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.522175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.522208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 
00:27:47.051 [2024-11-19 11:39:00.522398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.522431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.522551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.522583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.522709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.522741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.522865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.522897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.523087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.523121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 
00:27:47.051 [2024-11-19 11:39:00.523229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.523263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.523384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.523417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.523621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.523679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.523812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.523846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.523986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.524021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 
00:27:47.051 [2024-11-19 11:39:00.524138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.524170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.051 [2024-11-19 11:39:00.524286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.051 [2024-11-19 11:39:00.524317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.051 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.524565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.524598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.524728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.524759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.524881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.524914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 
00:27:47.052 [2024-11-19 11:39:00.525105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.525139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.525321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.525354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.525477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.525508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.525630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.525663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.525772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.525804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 
00:27:47.052 [2024-11-19 11:39:00.525911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.525943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.526072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.526106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.526230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.526261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.526368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.526400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.526529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.526563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 
00:27:47.052 [2024-11-19 11:39:00.526676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.526707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.526820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.526853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.527039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.527071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.527189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.527221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.527337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.527369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 
00:27:47.052 [2024-11-19 11:39:00.527577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.527610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.527831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.527862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.528110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.528144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.528279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.528311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.528651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.528688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 
00:27:47.052 [2024-11-19 11:39:00.528929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.528971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.529184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.529215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.529351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.529384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.529639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.529670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.529936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.529979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 
00:27:47.052 [2024-11-19 11:39:00.530218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.530250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.530427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.530460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.530661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.530692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.530801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.052 [2024-11-19 11:39:00.530833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.052 qpair failed and we were unable to recover it. 00:27:47.052 [2024-11-19 11:39:00.530968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.531001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 
00:27:47.053 [2024-11-19 11:39:00.531119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.531151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.531342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.531373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.531586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.531618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.531807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.531838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.532011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.532045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 
00:27:47.053 [2024-11-19 11:39:00.532238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.532271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.532445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.532476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.532678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.532711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.532903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.532934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.533137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.533172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 
00:27:47.053 [2024-11-19 11:39:00.533359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.533391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.533582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.533614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.533812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.533843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.534072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.534104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.534213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.534245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 
00:27:47.053 [2024-11-19 11:39:00.534449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.534480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.534593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.534625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.534759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.534791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.535072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.535105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.535235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.535268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 
00:27:47.053 [2024-11-19 11:39:00.535469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.535500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.535617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.535649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.535823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.535855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.536080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.536118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.536265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.536297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 
00:27:47.053 [2024-11-19 11:39:00.536483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.536514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.536707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.536739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.536945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.536985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.537175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.537208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.537321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.537354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 
00:27:47.053 [2024-11-19 11:39:00.537538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.537570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.537681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.537714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.537840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.537871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.538065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.538097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-11-19 11:39:00.538227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-11-19 11:39:00.538260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 
00:27:47.056 [2024-11-19 11:39:00.557906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.557938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.558049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.558081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.558197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.558228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.558438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.558470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.558616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.558657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.558788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.558822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.559114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.559148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.559253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.559286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.559465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.559498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.559629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.559661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.559777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.559810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.559984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.560018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.560128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.560159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.560279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.560312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.560422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.560453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.560579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.560611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.056 qpair failed and we were unable to recover it.
00:27:47.056 [2024-11-19 11:39:00.560814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.056 [2024-11-19 11:39:00.560845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.561030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.561071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.561217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.561250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.561488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.561520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.561786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.561817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.561956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.561989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.562166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.562198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.562310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.562342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.562470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.562502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.562623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.562655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.562825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.562856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.562997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.563031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.563140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.563172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.563286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.563318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.563434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.563466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.563580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.563612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.563796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.563827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.563942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.563987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.564197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.564229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.564334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.564367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.564575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.564607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.564786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.564819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.564926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.564968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.565149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.565180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.565301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.565334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.565461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.565493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.565600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.565632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.565749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.565781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5068000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.565919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.565973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.566111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.566143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.566256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.566287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.566461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.566492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.566615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.566647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.566916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.566959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.567155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.567187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.057 qpair failed and we were unable to recover it.
00:27:47.057 [2024-11-19 11:39:00.567380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.057 [2024-11-19 11:39:00.567411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.567532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.567563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.567767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.567798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.568060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.568094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.568224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.568255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.568448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.568478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.568598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.568636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.568854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.568885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.569071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.569103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.569342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.569375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.569604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.569635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.569883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.569914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.570183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.570215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.570403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.570435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.570734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.570765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.570901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.570932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:47.058 [2024-11-19 11:39:00.571142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.571175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.571321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.571352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:47.058 [2024-11-19 11:39:00.571498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.571532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.571730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.571762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:47.058 [2024-11-19 11:39:00.572001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.572036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:47.058 [2024-11-19 11:39:00.572255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.572288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.058 [2024-11-19 11:39:00.572476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.572509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.572746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.572778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.572959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.572991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.573098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.573129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.573255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.573286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.573462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.573493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.573706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.573737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.573939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.573980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.574231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.574261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.574396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.574427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.574580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.574611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.574816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.574846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.575023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.575056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.575201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.058 [2024-11-19 11:39:00.575234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.058 qpair failed and we were unable to recover it.
00:27:47.058 [2024-11-19 11:39:00.575427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.059 [2024-11-19 11:39:00.575457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.059 qpair failed and we were unable to recover it.
00:27:47.059 [2024-11-19 11:39:00.575608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.059 [2024-11-19 11:39:00.575639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.059 qpair failed and we were unable to recover it.
00:27:47.059 [2024-11-19 11:39:00.575895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.059 [2024-11-19 11:39:00.575927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.059 qpair failed and we were unable to recover it.
00:27:47.059 [2024-11-19 11:39:00.576082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.059 [2024-11-19 11:39:00.576115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.059 qpair failed and we were unable to recover it.
00:27:47.059 [2024-11-19 11:39:00.576297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.059 [2024-11-19 11:39:00.576330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.059 qpair failed and we were unable to recover it.
00:27:47.059 [2024-11-19 11:39:00.576514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.059 [2024-11-19 11:39:00.576546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.059 qpair failed and we were unable to recover it.
00:27:47.059 [2024-11-19 11:39:00.576743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.576774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.576891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.576922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.577073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.577111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.577249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.577279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.577407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.577439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-11-19 11:39:00.577561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.577592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.577797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.577829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.577967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.578000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.578203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.578234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.578438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.578469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-11-19 11:39:00.578592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.578623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.578758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.578788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.578925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.578967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.579150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.579181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.579465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.579496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-11-19 11:39:00.579626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.579658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.579845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.579876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.580015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.580047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.580204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.580236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.580344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.580376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-11-19 11:39:00.580508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.580538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.580677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.580708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.580829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.580860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.580996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.581028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.581225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.581258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-11-19 11:39:00.581366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.581398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.581533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.581565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.581742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.581772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.581888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.581922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.582067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.582100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 
00:27:47.059 [2024-11-19 11:39:00.582218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.582249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.582360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.059 [2024-11-19 11:39:00.582392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.059 qpair failed and we were unable to recover it. 00:27:47.059 [2024-11-19 11:39:00.582511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.582542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.582667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.582698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.582817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.582847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 
00:27:47.060 [2024-11-19 11:39:00.582975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.583008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.583126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.583157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.583294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.583324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.583428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.583459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.583650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.583683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 
00:27:47.060 [2024-11-19 11:39:00.583825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.583857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.583965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.584000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.584172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.584211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.584329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.584361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.584476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.584508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 
00:27:47.060 [2024-11-19 11:39:00.584672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.584703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.584819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.584850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.584981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.585014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.585137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.585168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.585275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.585306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 
00:27:47.060 [2024-11-19 11:39:00.585437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.585468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.585577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.585608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.585719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.585750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.585930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.585968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.586100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.586132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 
00:27:47.060 [2024-11-19 11:39:00.586305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.586337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.586460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.586491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.586693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.586725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.586904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.586935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.587066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.587098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 
00:27:47.060 [2024-11-19 11:39:00.587217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.587248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.587392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.587424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.587645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.587677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.587876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.587908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.588038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.588071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 
00:27:47.060 [2024-11-19 11:39:00.588222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.588252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.588394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.588425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.588561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.588591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.588785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.588817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 00:27:47.060 [2024-11-19 11:39:00.588940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.060 [2024-11-19 11:39:00.589003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.060 qpair failed and we were unable to recover it. 
00:27:47.060 [2024-11-19 11:39:00.589136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.589165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.589272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.589303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.589430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.589462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.589607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.589638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.589768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.589798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 
00:27:47.061 [2024-11-19 11:39:00.589935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.589973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.590096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.590123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.590229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.590258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.590369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.590396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.590504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.590533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 
00:27:47.061 [2024-11-19 11:39:00.590632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.590659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.590849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.590877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.590990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.591026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.591144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.591171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.591294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.591322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 
00:27:47.061 [2024-11-19 11:39:00.591440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.591468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.591643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.591671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.591868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.591896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.592019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.592050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.592161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.592189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 
00:27:47.061 [2024-11-19 11:39:00.592307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.592334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.592438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.592466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.592586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.592613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.592724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.592751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 00:27:47.061 [2024-11-19 11:39:00.592861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.061 [2024-11-19 11:39:00.592889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.061 qpair failed and we were unable to recover it. 
00:27:47.061 [2024-11-19 11:39:00.593019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.061 [2024-11-19 11:39:00.593050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.061 qpair failed and we were unable to recover it.
00:27:47.061 [2024-11-19 11:39:00.593169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.061 [2024-11-19 11:39:00.593198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.061 qpair failed and we were unable to recover it.
00:27:47.061 [2024-11-19 11:39:00.593306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.061 [2024-11-19 11:39:00.593336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.061 qpair failed and we were unable to recover it.
00:27:47.061 [2024-11-19 11:39:00.593449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.061 [2024-11-19 11:39:00.593477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.061 qpair failed and we were unable to recover it.
00:27:47.061 [2024-11-19 11:39:00.593587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.061 [2024-11-19 11:39:00.593615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.061 qpair failed and we were unable to recover it.
00:27:47.061 [2024-11-19 11:39:00.593733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.061 [2024-11-19 11:39:00.593762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.061 qpair failed and we were unable to recover it.
00:27:47.061 [2024-11-19 11:39:00.593858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.061 [2024-11-19 11:39:00.593885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.061 qpair failed and we were unable to recover it.
00:27:47.061 [2024-11-19 11:39:00.594054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.061 [2024-11-19 11:39:00.594084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.061 qpair failed and we were unable to recover it.
00:27:47.061 [2024-11-19 11:39:00.594193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.061 [2024-11-19 11:39:00.594221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.594387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.594415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.594687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.594717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.594964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.594994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.595101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.595130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.595269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.595298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.595428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.595475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.595692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.595725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.595874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.595907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.596080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.596114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.596310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.596342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.596477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.596510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.596711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.596744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.596871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.596905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.597117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.597152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.597280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.597312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.597458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.597490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.597622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.597654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.597836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.597868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.598085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.598118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.598300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.598332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.598461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.598492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.598780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.598812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.598958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.598992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.599122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.599154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.599303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.599335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.599489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.599521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.599737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.599769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.599965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.599998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.600201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.600233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.600365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.600398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.600533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.600564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.600792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.600824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.601021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.601061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.601278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.601309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.601453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.601485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.601777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.601810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.602051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.602085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.062 [2024-11-19 11:39:00.602287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.062 [2024-11-19 11:39:00.602320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.062 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.602435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.602467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.602705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.602736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.603052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.603086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.603227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.603259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.603391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.603423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.603648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.603679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.603964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.603998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.604146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.604178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.604382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.604414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.604557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.604591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.604851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.604882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.605122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.605156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.605334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.605365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.605603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.605635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.605896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.605928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.606109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.606142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.606269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.606302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.606452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.606484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.606615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.606647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.606822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.606854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.607009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.607043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.607332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.607370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.607487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.607519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.607694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.607726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.607930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.607970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.608094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.608126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:47.063 [2024-11-19 11:39:00.608318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.608352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.608485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.608516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.608648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:47.063 [2024-11-19 11:39:00.608680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.608818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.608850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.608973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.609006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.609193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.609225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.063 [2024-11-19 11:39:00.609356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.609389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.609513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.609551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.609670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.609702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.063 [2024-11-19 11:39:00.609814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.063 [2024-11-19 11:39:00.609844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.063 qpair failed and we were unable to recover it.
00:27:47.064 [2024-11-19 11:39:00.609982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.064 [2024-11-19 11:39:00.610018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.064 qpair failed and we were unable to recover it.
00:27:47.064 [2024-11-19 11:39:00.610138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.064 [2024-11-19 11:39:00.610169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.064 qpair failed and we were unable to recover it.
00:27:47.064 [2024-11-19 11:39:00.610299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.064 [2024-11-19 11:39:00.610330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.064 qpair failed and we were unable to recover it.
00:27:47.064 [2024-11-19 11:39:00.610450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.064 [2024-11-19 11:39:00.610480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.064 qpair failed and we were unable to recover it.
00:27:47.064 [2024-11-19 11:39:00.610619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.064 [2024-11-19 11:39:00.610649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.064 qpair failed and we were unable to recover it.
00:27:47.064 [2024-11-19 11:39:00.610829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.064 [2024-11-19 11:39:00.610860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.064 qpair failed and we were unable to recover it.
00:27:47.064 [2024-11-19 11:39:00.610985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.064 [2024-11-19 11:39:00.611017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.064 qpair failed and we were unable to recover it.
00:27:47.064 [2024-11-19 11:39:00.611137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.064 [2024-11-19 11:39:00.611167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.064 qpair failed and we were unable to recover it.
00:27:47.064 [2024-11-19 11:39:00.611351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.064 [2024-11-19 11:39:00.611383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.064 qpair failed and we were unable to recover it.
00:27:47.064 [2024-11-19 11:39:00.611514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.064 [2024-11-19 11:39:00.611545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.064 qpair failed and we were unable to recover it.
00:27:47.064 [2024-11-19 11:39:00.611662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.064 [2024-11-19 11:39:00.611698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420
00:27:47.064 qpair failed and we were unable to recover it.
00:27:47.064 [2024-11-19 11:39:00.611876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.611909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.612037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.612071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.612196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.612229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.612404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.612436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.612550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.612581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-11-19 11:39:00.612700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.612731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.612850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.612882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.612998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.613030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.613161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.613193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.613300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.613331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-11-19 11:39:00.613448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.613479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.613665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.613696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.613812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.613842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.614025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.614057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.614177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.614207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-11-19 11:39:00.614336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.614367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.614492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.614522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.614699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.614731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.614921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.614960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.615078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.615108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-11-19 11:39:00.615219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.615250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.615375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.615406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.615515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.615545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.615660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.615692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.615806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.615836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-11-19 11:39:00.615943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.615984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-11-19 11:39:00.616179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-11-19 11:39:00.616212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.616459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.616492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.616603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.616633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.616755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.616785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-11-19 11:39:00.616889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.616920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.617040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.617071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.617191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.617223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.617337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.617368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.617538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.617570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-11-19 11:39:00.617757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.617789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.617911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.617942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.618059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.618091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.618213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.618245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.618361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.618398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-11-19 11:39:00.618530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.618561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.618685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.618717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.618832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.618863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.618973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.619006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.619137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.619170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-11-19 11:39:00.619298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.619330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.619435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.619466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.619581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.619614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.619724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.619756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.619862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.619894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-11-19 11:39:00.620024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.620054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.620213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.620242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.620349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.620379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.620545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.620574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.620688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.620717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-11-19 11:39:00.620832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.620861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.620972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.621002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.621110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.621139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.621320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.621348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.621517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.621545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-11-19 11:39:00.621650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.621678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.621790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.621818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.621929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.621966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.622132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.622161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-11-19 11:39:00.622275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-11-19 11:39:00.622304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 
00:27:47.066 [2024-11-19 11:39:00.622497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.622526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5064000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.622685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.622745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5070000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.622875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.622910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.623120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.623154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.623271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.623301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 
00:27:47.066 [2024-11-19 11:39:00.623415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.623446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.623560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.623591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.623709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.623740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.623845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.623875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.623996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.624028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 
00:27:47.066 [2024-11-19 11:39:00.624202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.624233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.624421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.624452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.624584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.624617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.624754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.624785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.624898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.624929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 
00:27:47.066 [2024-11-19 11:39:00.625145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.625178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.625286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.625318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.625828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.625868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.626073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.626109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-11-19 11:39:00.626237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-11-19 11:39:00.626271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 
00:27:47.066 [2024-11-19 11:39:00.626416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-11-19 11:39:00.626447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
00:27:47.066-00:27:47.068 [... the same three-line pattern (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats 69 more times between 11:39:00.626562 and 11:39:00.638482 ...]
00:27:47.068 Malloc0
00:27:47.068 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.068 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:47.068 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.068 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.069 [2024-11-19 11:39:00.646175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:47.068-00:27:47.069 [... the three-line pattern (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats 44 times between 11:39:00.638760 and 11:39:00.647959, interleaved with the harness lines above ...]
00:27:47.069 [2024-11-19 11:39:00.648101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-11-19 11:39:00.648130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-11-19 11:39:00.648271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-11-19 11:39:00.648301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-11-19 11:39:00.648431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-11-19 11:39:00.648460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-11-19 11:39:00.648573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-11-19 11:39:00.648602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-11-19 11:39:00.648798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-11-19 11:39:00.648829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 
00:27:47.069 [2024-11-19 11:39:00.649001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-11-19 11:39:00.649031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-11-19 11:39:00.649140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-11-19 11:39:00.649167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-11-19 11:39:00.649283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-11-19 11:39:00.649312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-11-19 11:39:00.649496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-11-19 11:39:00.649525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-11-19 11:39:00.649810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-11-19 11:39:00.649839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 
00:27:47.069 [2024-11-19 11:39:00.650041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-11-19 11:39:00.650083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-11-19 11:39:00.650259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.069 [2024-11-19 11:39:00.650290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.069 qpair failed and we were unable to recover it. 00:27:47.069 [2024-11-19 11:39:00.650412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.650442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.650553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.650586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.650828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.650860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 
00:27:47.070 [2024-11-19 11:39:00.651073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.651107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.651243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.651274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.651410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.651440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.651741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.651773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.652022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.652056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 
00:27:47.070 [2024-11-19 11:39:00.652198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.652231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.652365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.652396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.652781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.652813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.653000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.653032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.653178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.653209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 
00:27:47.070 [2024-11-19 11:39:00.653422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.653453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.653592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.653623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.653859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.653889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.654091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.654130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.654327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.654359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 
00:27:47.070 [2024-11-19 11:39:00.654503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.654534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.654798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.654831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.654944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.654986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.655171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.655203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 
00:27:47.070 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:47.070 [2024-11-19 11:39:00.655355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.655387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.655532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.655564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.070 [2024-11-19 11:39:00.655745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.655776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.070 [2024-11-19 11:39:00.656002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.656037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 
00:27:47.070 [2024-11-19 11:39:00.656160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.656191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.656324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.656355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.656493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.656526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.656808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.656840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.657028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.657060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 
00:27:47.070 [2024-11-19 11:39:00.657212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.070 [2024-11-19 11:39:00.657244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-11-19 11:39:00.657382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.657413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.657618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.657649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.657870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.657903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.658094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.658126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 
00:27:47.071 [2024-11-19 11:39:00.658258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.658290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.658435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.658468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.658684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.658715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.658902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.658933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.659088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.659120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 
00:27:47.071 [2024-11-19 11:39:00.659254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.659286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.659410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.659442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.659701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.659735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.659998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.660031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.660249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.660280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 
00:27:47.071 [2024-11-19 11:39:00.660459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.660491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.660773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.660806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.660994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.661027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.661170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.661202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.661382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.661414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 
00:27:47.071 [2024-11-19 11:39:00.661622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.661653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.661842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.661874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.662006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.662039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.662227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.662259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.662419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.662451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 
00:27:47.071 [2024-11-19 11:39:00.662802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.662833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.071 [2024-11-19 11:39:00.663003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.663036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.663174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.663205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:47.071 [2024-11-19 11:39:00.663394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.663426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 
00:27:47.071 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.071 [2024-11-19 11:39:00.663668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.663701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.663887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.663919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.664116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.664150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.664293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.664325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.664469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.664502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 
00:27:47.071 [2024-11-19 11:39:00.664713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.664745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.664939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.665004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.665164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.665196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.665350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.071 [2024-11-19 11:39:00.665382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.071 qpair failed and we were unable to recover it. 00:27:47.071 [2024-11-19 11:39:00.665507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-11-19 11:39:00.665538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 
00:27:47.072 [2024-11-19 11:39:00.665803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.665836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.665966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.666000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.666242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.666275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.666409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.666440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.666630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.666662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.666901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.666933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.667188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.667220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.667400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.667431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.667638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.667669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.667942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.667985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.668260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.668291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.668439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.668471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.668771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.668803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.669087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.669119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.669253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.669284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.669461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.669493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.669707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.669738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.669940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.669982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.670200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.670231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.670376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.670407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.670615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.670646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.670850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.670882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.671150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.671182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:47.072 [2024-11-19 11:39:00.671299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.671336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.671476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.671506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.072 [2024-11-19 11:39:00.671723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.671754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.072 [2024-11-19 11:39:00.672021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.672054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.672201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.672233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.672371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.672401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.672670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.672700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.672979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.673013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.673159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.673191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.673323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.673356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.673534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.673565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.673850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.673882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.072 [2024-11-19 11:39:00.674031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-11-19 11:39:00.674064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
00:27:47.073 [2024-11-19 11:39:00.674224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.073 [2024-11-19 11:39:00.674254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadaba0 with addr=10.0.0.2, port=4420
00:27:47.073 qpair failed and we were unable to recover it.
00:27:47.073 [2024-11-19 11:39:00.674422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:47.073 [2024-11-19 11:39:00.676849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.073 [2024-11-19 11:39:00.676977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.073 [2024-11-19 11:39:00.677026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.073 [2024-11-19 11:39:00.677049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.073 [2024-11-19 11:39:00.677071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.073 [2024-11-19 11:39:00.677122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.073 qpair failed and we were unable to recover it.
00:27:47.073 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.073 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:47.073 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.073 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.073 [2024-11-19 11:39:00.686778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.073 [2024-11-19 11:39:00.686880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.073 [2024-11-19 11:39:00.686916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.073 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.073 [2024-11-19 11:39:00.686934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.073 [2024-11-19 11:39:00.686962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.073 [2024-11-19 11:39:00.687007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.073 qpair failed and we were unable to recover it.
00:27:47.073 11:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2421591
00:27:47.073 [2024-11-19 11:39:00.696767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.073 [2024-11-19 11:39:00.696866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.073 [2024-11-19 11:39:00.696889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.073 [2024-11-19 11:39:00.696902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.073 [2024-11-19 11:39:00.696913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.073 [2024-11-19 11:39:00.696939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.073 qpair failed and we were unable to recover it.
00:27:47.073 [2024-11-19 11:39:00.706700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.073 [2024-11-19 11:39:00.706765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.073 [2024-11-19 11:39:00.706788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.073 [2024-11-19 11:39:00.706796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.073 [2024-11-19 11:39:00.706804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.073 [2024-11-19 11:39:00.706821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.073 qpair failed and we were unable to recover it.
00:27:47.073 [2024-11-19 11:39:00.716701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.073 [2024-11-19 11:39:00.716761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.073 [2024-11-19 11:39:00.716776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.073 [2024-11-19 11:39:00.716783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.073 [2024-11-19 11:39:00.716788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.073 [2024-11-19 11:39:00.716803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.073 qpair failed and we were unable to recover it.
00:27:47.073 [2024-11-19 11:39:00.726769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.073 [2024-11-19 11:39:00.726824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.073 [2024-11-19 11:39:00.726840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.073 [2024-11-19 11:39:00.726848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.073 [2024-11-19 11:39:00.726855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.073 [2024-11-19 11:39:00.726870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.073 qpair failed and we were unable to recover it.
00:27:47.073 [2024-11-19 11:39:00.736793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.073 [2024-11-19 11:39:00.736845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.073 [2024-11-19 11:39:00.736860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.073 [2024-11-19 11:39:00.736866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.073 [2024-11-19 11:39:00.736873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.073 [2024-11-19 11:39:00.736887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.073 qpair failed and we were unable to recover it.
00:27:47.073 [2024-11-19 11:39:00.746819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.073 [2024-11-19 11:39:00.746875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.073 [2024-11-19 11:39:00.746893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.073 [2024-11-19 11:39:00.746900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.073 [2024-11-19 11:39:00.746906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.073 [2024-11-19 11:39:00.746921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.073 qpair failed and we were unable to recover it.
00:27:47.073 [2024-11-19 11:39:00.756845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.073 [2024-11-19 11:39:00.756905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.073 [2024-11-19 11:39:00.756920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.073 [2024-11-19 11:39:00.756927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.073 [2024-11-19 11:39:00.756933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.073 [2024-11-19 11:39:00.756953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.073 qpair failed and we were unable to recover it.
00:27:47.073 [2024-11-19 11:39:00.766854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.073 [2024-11-19 11:39:00.766913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.073 [2024-11-19 11:39:00.766927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.073 [2024-11-19 11:39:00.766934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.073 [2024-11-19 11:39:00.766940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.073 [2024-11-19 11:39:00.766958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.073 qpair failed and we were unable to recover it.
00:27:47.073 [2024-11-19 11:39:00.776960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.073 [2024-11-19 11:39:00.777061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.073 [2024-11-19 11:39:00.777076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.073 [2024-11-19 11:39:00.777082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.073 [2024-11-19 11:39:00.777088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.073 [2024-11-19 11:39:00.777103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.073 qpair failed and we were unable to recover it.
00:27:47.073 [2024-11-19 11:39:00.786860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.073 [2024-11-19 11:39:00.786921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.073 [2024-11-19 11:39:00.786935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.073 [2024-11-19 11:39:00.786941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.073 [2024-11-19 11:39:00.786955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.074 [2024-11-19 11:39:00.786970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.074 qpair failed and we were unable to recover it.
00:27:47.074 [2024-11-19 11:39:00.796970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.074 [2024-11-19 11:39:00.797028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.074 [2024-11-19 11:39:00.797042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.074 [2024-11-19 11:39:00.797048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.074 [2024-11-19 11:39:00.797054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.074 [2024-11-19 11:39:00.797069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.074 qpair failed and we were unable to recover it.
00:27:47.074 [2024-11-19 11:39:00.807006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.074 [2024-11-19 11:39:00.807067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.074 [2024-11-19 11:39:00.807081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.074 [2024-11-19 11:39:00.807087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.074 [2024-11-19 11:39:00.807094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.074 [2024-11-19 11:39:00.807109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.074 qpair failed and we were unable to recover it.
00:27:47.334 [2024-11-19 11:39:00.817020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.334 [2024-11-19 11:39:00.817077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.334 [2024-11-19 11:39:00.817092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.334 [2024-11-19 11:39:00.817100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.334 [2024-11-19 11:39:00.817106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.334 [2024-11-19 11:39:00.817121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.334 qpair failed and we were unable to recover it.
00:27:47.334 [2024-11-19 11:39:00.827053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.334 [2024-11-19 11:39:00.827110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.335 [2024-11-19 11:39:00.827124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.335 [2024-11-19 11:39:00.827131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.335 [2024-11-19 11:39:00.827137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.335 [2024-11-19 11:39:00.827153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.335 qpair failed and we were unable to recover it.
00:27:47.335 [2024-11-19 11:39:00.837100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.335 [2024-11-19 11:39:00.837176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.335 [2024-11-19 11:39:00.837191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.335 [2024-11-19 11:39:00.837197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.335 [2024-11-19 11:39:00.837203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.335 [2024-11-19 11:39:00.837217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.335 qpair failed and we were unable to recover it.
00:27:47.335 [2024-11-19 11:39:00.847045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.335 [2024-11-19 11:39:00.847098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.335 [2024-11-19 11:39:00.847111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.335 [2024-11-19 11:39:00.847118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.335 [2024-11-19 11:39:00.847125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.335 [2024-11-19 11:39:00.847139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.335 qpair failed and we were unable to recover it.
00:27:47.335 [2024-11-19 11:39:00.857145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.335 [2024-11-19 11:39:00.857219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.335 [2024-11-19 11:39:00.857234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.335 [2024-11-19 11:39:00.857240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.335 [2024-11-19 11:39:00.857246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.335 [2024-11-19 11:39:00.857261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.335 qpair failed and we were unable to recover it. 
00:27:47.335 [2024-11-19 11:39:00.867210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.335 [2024-11-19 11:39:00.867269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.335 [2024-11-19 11:39:00.867284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.335 [2024-11-19 11:39:00.867291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.335 [2024-11-19 11:39:00.867297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.335 [2024-11-19 11:39:00.867313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.335 qpair failed and we were unable to recover it. 
00:27:47.335 [2024-11-19 11:39:00.877127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.335 [2024-11-19 11:39:00.877184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.335 [2024-11-19 11:39:00.877201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.335 [2024-11-19 11:39:00.877208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.335 [2024-11-19 11:39:00.877214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.335 [2024-11-19 11:39:00.877229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.335 qpair failed and we were unable to recover it. 
00:27:47.335 [2024-11-19 11:39:00.887238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.335 [2024-11-19 11:39:00.887292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.335 [2024-11-19 11:39:00.887306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.335 [2024-11-19 11:39:00.887313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.335 [2024-11-19 11:39:00.887319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.335 [2024-11-19 11:39:00.887333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.335 qpair failed and we were unable to recover it. 
00:27:47.335 [2024-11-19 11:39:00.897288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.335 [2024-11-19 11:39:00.897345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.335 [2024-11-19 11:39:00.897359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.335 [2024-11-19 11:39:00.897367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.335 [2024-11-19 11:39:00.897373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.335 [2024-11-19 11:39:00.897388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.335 qpair failed and we were unable to recover it. 
00:27:47.335 [2024-11-19 11:39:00.907283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.335 [2024-11-19 11:39:00.907343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.335 [2024-11-19 11:39:00.907357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.335 [2024-11-19 11:39:00.907364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.335 [2024-11-19 11:39:00.907370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.335 [2024-11-19 11:39:00.907384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.335 qpair failed and we were unable to recover it. 
00:27:47.335 [2024-11-19 11:39:00.917324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.335 [2024-11-19 11:39:00.917378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.335 [2024-11-19 11:39:00.917392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.335 [2024-11-19 11:39:00.917399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.335 [2024-11-19 11:39:00.917408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.335 [2024-11-19 11:39:00.917422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.335 qpair failed and we were unable to recover it. 
00:27:47.335 [2024-11-19 11:39:00.927378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.335 [2024-11-19 11:39:00.927433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.335 [2024-11-19 11:39:00.927448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.335 [2024-11-19 11:39:00.927455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.335 [2024-11-19 11:39:00.927461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.335 [2024-11-19 11:39:00.927476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.335 qpair failed and we were unable to recover it. 
00:27:47.335 [2024-11-19 11:39:00.937348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.335 [2024-11-19 11:39:00.937401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.335 [2024-11-19 11:39:00.937415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.335 [2024-11-19 11:39:00.937423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.335 [2024-11-19 11:39:00.937429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.335 [2024-11-19 11:39:00.937444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.335 qpair failed and we were unable to recover it. 
00:27:47.335 [2024-11-19 11:39:00.947408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.335 [2024-11-19 11:39:00.947469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.335 [2024-11-19 11:39:00.947483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.335 [2024-11-19 11:39:00.947490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.335 [2024-11-19 11:39:00.947497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.335 [2024-11-19 11:39:00.947511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.335 qpair failed and we were unable to recover it. 
00:27:47.335 [2024-11-19 11:39:00.957369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:00.957454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:00.957469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:00.957476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:00.957482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:00.957497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:00.967413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:00.967468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:00.967482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:00.967489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:00.967496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:00.967510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:00.977482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:00.977534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:00.977548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:00.977554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:00.977560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:00.977574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:00.987480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:00.987564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:00.987577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:00.987584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:00.987590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:00.987604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:00.997551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:00.997634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:00.997647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:00.997654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:00.997660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:00.997674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:01.007540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:01.007594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:01.007614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:01.007621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:01.007627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:01.007642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:01.017518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:01.017579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:01.017593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:01.017600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:01.017606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:01.017621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:01.027591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:01.027651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:01.027666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:01.027673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:01.027679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:01.027694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:01.037683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:01.037742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:01.037756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:01.037763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:01.037769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:01.037783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:01.047684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:01.047739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:01.047753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:01.047759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:01.047768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:01.047783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:01.057700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:01.057756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:01.057770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:01.057776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:01.057782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:01.057797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:01.067766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:01.067850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:01.067864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:01.067871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:01.067876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:01.067891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:01.077691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:01.077753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.336 [2024-11-19 11:39:01.077767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.336 [2024-11-19 11:39:01.077774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.336 [2024-11-19 11:39:01.077780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.336 [2024-11-19 11:39:01.077795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.336 qpair failed and we were unable to recover it. 
00:27:47.336 [2024-11-19 11:39:01.087716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.336 [2024-11-19 11:39:01.087778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.337 [2024-11-19 11:39:01.087793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.337 [2024-11-19 11:39:01.087800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.337 [2024-11-19 11:39:01.087806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.337 [2024-11-19 11:39:01.087820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.337 qpair failed and we were unable to recover it. 
00:27:47.337 [2024-11-19 11:39:01.097851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.337 [2024-11-19 11:39:01.097900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.337 [2024-11-19 11:39:01.097914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.337 [2024-11-19 11:39:01.097921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.337 [2024-11-19 11:39:01.097926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.337 [2024-11-19 11:39:01.097941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.337 qpair failed and we were unable to recover it. 
00:27:47.337 [2024-11-19 11:39:01.107895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.337 [2024-11-19 11:39:01.107997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.337 [2024-11-19 11:39:01.108011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.337 [2024-11-19 11:39:01.108017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.337 [2024-11-19 11:39:01.108023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.337 [2024-11-19 11:39:01.108038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.337 qpair failed and we were unable to recover it. 
00:27:47.597 [2024-11-19 11:39:01.117862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.597 [2024-11-19 11:39:01.117916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.597 [2024-11-19 11:39:01.117930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.597 [2024-11-19 11:39:01.117936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.597 [2024-11-19 11:39:01.117943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.597 [2024-11-19 11:39:01.117964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.597 qpair failed and we were unable to recover it. 
00:27:47.597 [2024-11-19 11:39:01.127904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.597 [2024-11-19 11:39:01.127962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.597 [2024-11-19 11:39:01.127978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.597 [2024-11-19 11:39:01.127985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.597 [2024-11-19 11:39:01.127991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.598 [2024-11-19 11:39:01.128006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.598 qpair failed and we were unable to recover it. 
00:27:47.598 [2024-11-19 11:39:01.137936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.598 [2024-11-19 11:39:01.137991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.598 [2024-11-19 11:39:01.138008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.598 [2024-11-19 11:39:01.138015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.598 [2024-11-19 11:39:01.138022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.598 [2024-11-19 11:39:01.138036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.598 qpair failed and we were unable to recover it. 
00:27:47.598 [2024-11-19 11:39:01.147892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.598 [2024-11-19 11:39:01.147964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.598 [2024-11-19 11:39:01.147979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.598 [2024-11-19 11:39:01.147986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.598 [2024-11-19 11:39:01.147992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.598 [2024-11-19 11:39:01.148007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.598 qpair failed and we were unable to recover it.
00:27:47.598 [2024-11-19 11:39:01.158007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.598 [2024-11-19 11:39:01.158064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.598 [2024-11-19 11:39:01.158078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.598 [2024-11-19 11:39:01.158086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.598 [2024-11-19 11:39:01.158092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.598 [2024-11-19 11:39:01.158107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.598 qpair failed and we were unable to recover it.
00:27:47.598 [2024-11-19 11:39:01.168061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.598 [2024-11-19 11:39:01.168120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.598 [2024-11-19 11:39:01.168133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.598 [2024-11-19 11:39:01.168140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.598 [2024-11-19 11:39:01.168147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.598 [2024-11-19 11:39:01.168161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.598 qpair failed and we were unable to recover it.
00:27:47.598 [2024-11-19 11:39:01.178049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.598 [2024-11-19 11:39:01.178099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.598 [2024-11-19 11:39:01.178113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.598 [2024-11-19 11:39:01.178120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.598 [2024-11-19 11:39:01.178130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.598 [2024-11-19 11:39:01.178144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.598 qpair failed and we were unable to recover it.
00:27:47.598 [2024-11-19 11:39:01.188109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.598 [2024-11-19 11:39:01.188172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.598 [2024-11-19 11:39:01.188186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.598 [2024-11-19 11:39:01.188192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.598 [2024-11-19 11:39:01.188198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.598 [2024-11-19 11:39:01.188212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.598 qpair failed and we were unable to recover it.
00:27:47.598 [2024-11-19 11:39:01.198152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.598 [2024-11-19 11:39:01.198215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.598 [2024-11-19 11:39:01.198230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.598 [2024-11-19 11:39:01.198236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.598 [2024-11-19 11:39:01.198243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.598 [2024-11-19 11:39:01.198257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.598 qpair failed and we were unable to recover it.
00:27:47.598 [2024-11-19 11:39:01.208183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.598 [2024-11-19 11:39:01.208236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.598 [2024-11-19 11:39:01.208251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.598 [2024-11-19 11:39:01.208257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.598 [2024-11-19 11:39:01.208263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.598 [2024-11-19 11:39:01.208278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.598 qpair failed and we were unable to recover it.
00:27:47.598 [2024-11-19 11:39:01.218187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.598 [2024-11-19 11:39:01.218245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.598 [2024-11-19 11:39:01.218259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.598 [2024-11-19 11:39:01.218266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.598 [2024-11-19 11:39:01.218272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.598 [2024-11-19 11:39:01.218286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.598 qpair failed and we were unable to recover it.
00:27:47.598 [2024-11-19 11:39:01.228181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.598 [2024-11-19 11:39:01.228274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.598 [2024-11-19 11:39:01.228288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.598 [2024-11-19 11:39:01.228295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.598 [2024-11-19 11:39:01.228301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.598 [2024-11-19 11:39:01.228316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.598 qpair failed and we were unable to recover it.
00:27:47.598 [2024-11-19 11:39:01.238269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.598 [2024-11-19 11:39:01.238326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.598 [2024-11-19 11:39:01.238340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.598 [2024-11-19 11:39:01.238347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.598 [2024-11-19 11:39:01.238353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.598 [2024-11-19 11:39:01.238367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.598 qpair failed and we were unable to recover it.
00:27:47.598 [2024-11-19 11:39:01.248250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.598 [2024-11-19 11:39:01.248302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.598 [2024-11-19 11:39:01.248316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.598 [2024-11-19 11:39:01.248323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.598 [2024-11-19 11:39:01.248329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.598 [2024-11-19 11:39:01.248344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.598 qpair failed and we were unable to recover it.
00:27:47.598 [2024-11-19 11:39:01.258310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.598 [2024-11-19 11:39:01.258366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.598 [2024-11-19 11:39:01.258380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.598 [2024-11-19 11:39:01.258387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.598 [2024-11-19 11:39:01.258394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.599 [2024-11-19 11:39:01.258408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.599 qpair failed and we were unable to recover it.
00:27:47.599 [2024-11-19 11:39:01.268332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.599 [2024-11-19 11:39:01.268438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.599 [2024-11-19 11:39:01.268455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.599 [2024-11-19 11:39:01.268462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.599 [2024-11-19 11:39:01.268468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.599 [2024-11-19 11:39:01.268483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.599 qpair failed and we were unable to recover it.
00:27:47.599 [2024-11-19 11:39:01.278286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.599 [2024-11-19 11:39:01.278342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.599 [2024-11-19 11:39:01.278356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.599 [2024-11-19 11:39:01.278363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.599 [2024-11-19 11:39:01.278369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.599 [2024-11-19 11:39:01.278383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.599 qpair failed and we were unable to recover it.
00:27:47.599 [2024-11-19 11:39:01.288310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.599 [2024-11-19 11:39:01.288361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.599 [2024-11-19 11:39:01.288376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.599 [2024-11-19 11:39:01.288382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.599 [2024-11-19 11:39:01.288389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.599 [2024-11-19 11:39:01.288402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.599 qpair failed and we were unable to recover it.
00:27:47.599 [2024-11-19 11:39:01.298331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.599 [2024-11-19 11:39:01.298422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.599 [2024-11-19 11:39:01.298436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.599 [2024-11-19 11:39:01.298443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.599 [2024-11-19 11:39:01.298449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.599 [2024-11-19 11:39:01.298463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.599 qpair failed and we were unable to recover it.
00:27:47.599 [2024-11-19 11:39:01.308425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.599 [2024-11-19 11:39:01.308487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.599 [2024-11-19 11:39:01.308503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.599 [2024-11-19 11:39:01.308510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.599 [2024-11-19 11:39:01.308520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.599 [2024-11-19 11:39:01.308535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.599 qpair failed and we were unable to recover it.
00:27:47.599 [2024-11-19 11:39:01.318481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.599 [2024-11-19 11:39:01.318571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.599 [2024-11-19 11:39:01.318586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.599 [2024-11-19 11:39:01.318593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.599 [2024-11-19 11:39:01.318599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.599 [2024-11-19 11:39:01.318614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.599 qpair failed and we were unable to recover it.
00:27:47.599 [2024-11-19 11:39:01.328502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.599 [2024-11-19 11:39:01.328555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.599 [2024-11-19 11:39:01.328569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.599 [2024-11-19 11:39:01.328575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.599 [2024-11-19 11:39:01.328582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.599 [2024-11-19 11:39:01.328596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.599 qpair failed and we were unable to recover it.
00:27:47.599 [2024-11-19 11:39:01.338489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.599 [2024-11-19 11:39:01.338543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.599 [2024-11-19 11:39:01.338557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.599 [2024-11-19 11:39:01.338564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.599 [2024-11-19 11:39:01.338569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.599 [2024-11-19 11:39:01.338584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.599 qpair failed and we were unable to recover it.
00:27:47.599 [2024-11-19 11:39:01.348507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.599 [2024-11-19 11:39:01.348564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.599 [2024-11-19 11:39:01.348580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.599 [2024-11-19 11:39:01.348586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.599 [2024-11-19 11:39:01.348592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.599 [2024-11-19 11:39:01.348606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.599 qpair failed and we were unable to recover it.
00:27:47.599 [2024-11-19 11:39:01.358483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.599 [2024-11-19 11:39:01.358539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.599 [2024-11-19 11:39:01.358553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.599 [2024-11-19 11:39:01.358560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.599 [2024-11-19 11:39:01.358566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.599 [2024-11-19 11:39:01.358580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.599 qpair failed and we were unable to recover it.
00:27:47.599 [2024-11-19 11:39:01.368585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.599 [2024-11-19 11:39:01.368641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.599 [2024-11-19 11:39:01.368656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.599 [2024-11-19 11:39:01.368662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.599 [2024-11-19 11:39:01.368669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.599 [2024-11-19 11:39:01.368683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.599 qpair failed and we were unable to recover it.
00:27:47.859 [2024-11-19 11:39:01.378548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.859 [2024-11-19 11:39:01.378602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.859 [2024-11-19 11:39:01.378617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.859 [2024-11-19 11:39:01.378624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.859 [2024-11-19 11:39:01.378630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.860 [2024-11-19 11:39:01.378644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.860 qpair failed and we were unable to recover it.
00:27:47.860 [2024-11-19 11:39:01.388639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.860 [2024-11-19 11:39:01.388719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.860 [2024-11-19 11:39:01.388735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.860 [2024-11-19 11:39:01.388742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.860 [2024-11-19 11:39:01.388748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.860 [2024-11-19 11:39:01.388762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.860 qpair failed and we were unable to recover it.
00:27:47.860 [2024-11-19 11:39:01.398601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.860 [2024-11-19 11:39:01.398659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.860 [2024-11-19 11:39:01.398677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.860 [2024-11-19 11:39:01.398684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.860 [2024-11-19 11:39:01.398690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.860 [2024-11-19 11:39:01.398705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.860 qpair failed and we were unable to recover it.
00:27:47.860 [2024-11-19 11:39:01.408711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.860 [2024-11-19 11:39:01.408763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.860 [2024-11-19 11:39:01.408777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.860 [2024-11-19 11:39:01.408784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.860 [2024-11-19 11:39:01.408790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.860 [2024-11-19 11:39:01.408806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.860 qpair failed and we were unable to recover it.
00:27:47.860 [2024-11-19 11:39:01.418754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.860 [2024-11-19 11:39:01.418805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.860 [2024-11-19 11:39:01.418818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.860 [2024-11-19 11:39:01.418825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.860 [2024-11-19 11:39:01.418831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.860 [2024-11-19 11:39:01.418846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.860 qpair failed and we were unable to recover it.
00:27:47.860 [2024-11-19 11:39:01.428806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.860 [2024-11-19 11:39:01.428863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.860 [2024-11-19 11:39:01.428878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.860 [2024-11-19 11:39:01.428885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.860 [2024-11-19 11:39:01.428891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.860 [2024-11-19 11:39:01.428906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.860 qpair failed and we were unable to recover it.
00:27:47.860 [2024-11-19 11:39:01.438768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.860 [2024-11-19 11:39:01.438822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.860 [2024-11-19 11:39:01.438836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.860 [2024-11-19 11:39:01.438842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.860 [2024-11-19 11:39:01.438852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.860 [2024-11-19 11:39:01.438867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.860 qpair failed and we were unable to recover it.
00:27:47.860 [2024-11-19 11:39:01.448816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.860 [2024-11-19 11:39:01.448872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.860 [2024-11-19 11:39:01.448887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.860 [2024-11-19 11:39:01.448894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.860 [2024-11-19 11:39:01.448900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.860 [2024-11-19 11:39:01.448915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.860 qpair failed and we were unable to recover it.
00:27:47.860 [2024-11-19 11:39:01.458822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.860 [2024-11-19 11:39:01.458880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.860 [2024-11-19 11:39:01.458894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.860 [2024-11-19 11:39:01.458901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.860 [2024-11-19 11:39:01.458907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.860 [2024-11-19 11:39:01.458922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.860 qpair failed and we were unable to recover it.
00:27:47.860 [2024-11-19 11:39:01.468873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.860 [2024-11-19 11:39:01.468931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.860 [2024-11-19 11:39:01.468946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.860 [2024-11-19 11:39:01.468958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.860 [2024-11-19 11:39:01.468964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.860 [2024-11-19 11:39:01.468979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.860 qpair failed and we were unable to recover it.
00:27:47.860 [2024-11-19 11:39:01.479096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.860 [2024-11-19 11:39:01.479179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.860 [2024-11-19 11:39:01.479194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.860 [2024-11-19 11:39:01.479201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.860 [2024-11-19 11:39:01.479207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.860 [2024-11-19 11:39:01.479222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.860 qpair failed and we were unable to recover it.
00:27:47.860 [2024-11-19 11:39:01.488970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.860 [2024-11-19 11:39:01.489020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.860 [2024-11-19 11:39:01.489034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.860 [2024-11-19 11:39:01.489040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.860 [2024-11-19 11:39:01.489046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.860 [2024-11-19 11:39:01.489060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.860 qpair failed and we were unable to recover it.
00:27:47.860 [2024-11-19 11:39:01.498996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.860 [2024-11-19 11:39:01.499052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.860 [2024-11-19 11:39:01.499066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.860 [2024-11-19 11:39:01.499073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.860 [2024-11-19 11:39:01.499079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:47.860 [2024-11-19 11:39:01.499093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.860 qpair failed and we were unable to recover it. 
00:27:47.860 [2024-11-19 11:39:01.509049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.860 [2024-11-19 11:39:01.509110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.860 [2024-11-19 11:39:01.509125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.860 [2024-11-19 11:39:01.509131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.509137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.509153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:47.861 [2024-11-19 11:39:01.519036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.861 [2024-11-19 11:39:01.519088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.861 [2024-11-19 11:39:01.519102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.861 [2024-11-19 11:39:01.519109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.519115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.519129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:47.861 [2024-11-19 11:39:01.529024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.861 [2024-11-19 11:39:01.529074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.861 [2024-11-19 11:39:01.529091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.861 [2024-11-19 11:39:01.529098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.529105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.529120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:47.861 [2024-11-19 11:39:01.539056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.861 [2024-11-19 11:39:01.539111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.861 [2024-11-19 11:39:01.539126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.861 [2024-11-19 11:39:01.539133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.539139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.539153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:47.861 [2024-11-19 11:39:01.549132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.861 [2024-11-19 11:39:01.549196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.861 [2024-11-19 11:39:01.549210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.861 [2024-11-19 11:39:01.549217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.549223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.549239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:47.861 [2024-11-19 11:39:01.559126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.861 [2024-11-19 11:39:01.559213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.861 [2024-11-19 11:39:01.559227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.861 [2024-11-19 11:39:01.559233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.559239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.559254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:47.861 [2024-11-19 11:39:01.569156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.861 [2024-11-19 11:39:01.569212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.861 [2024-11-19 11:39:01.569226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.861 [2024-11-19 11:39:01.569232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.569242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.569257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:47.861 [2024-11-19 11:39:01.579179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.861 [2024-11-19 11:39:01.579273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.861 [2024-11-19 11:39:01.579287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.861 [2024-11-19 11:39:01.579294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.579300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.579315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:47.861 [2024-11-19 11:39:01.589212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.861 [2024-11-19 11:39:01.589268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.861 [2024-11-19 11:39:01.589283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.861 [2024-11-19 11:39:01.589289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.589295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.589310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:47.861 [2024-11-19 11:39:01.599253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.861 [2024-11-19 11:39:01.599310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.861 [2024-11-19 11:39:01.599325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.861 [2024-11-19 11:39:01.599332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.599338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.599352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:47.861 [2024-11-19 11:39:01.609197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.861 [2024-11-19 11:39:01.609251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.861 [2024-11-19 11:39:01.609265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.861 [2024-11-19 11:39:01.609272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.609278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.609293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:47.861 [2024-11-19 11:39:01.619301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.861 [2024-11-19 11:39:01.619361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.861 [2024-11-19 11:39:01.619376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.861 [2024-11-19 11:39:01.619382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.619388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.619403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:47.861 [2024-11-19 11:39:01.629332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.861 [2024-11-19 11:39:01.629389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.861 [2024-11-19 11:39:01.629406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.861 [2024-11-19 11:39:01.629413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.861 [2024-11-19 11:39:01.629419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:47.861 [2024-11-19 11:39:01.629435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:47.861 qpair failed and we were unable to recover it.
00:27:48.122 [2024-11-19 11:39:01.639363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.639421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.123 [2024-11-19 11:39:01.639436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.123 [2024-11-19 11:39:01.639443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.123 [2024-11-19 11:39:01.639450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.123 [2024-11-19 11:39:01.639464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.123 qpair failed and we were unable to recover it.
00:27:48.123 [2024-11-19 11:39:01.649438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.649490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.123 [2024-11-19 11:39:01.649505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.123 [2024-11-19 11:39:01.649512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.123 [2024-11-19 11:39:01.649518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.123 [2024-11-19 11:39:01.649532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.123 qpair failed and we were unable to recover it.
00:27:48.123 [2024-11-19 11:39:01.659407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.659463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.123 [2024-11-19 11:39:01.659484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.123 [2024-11-19 11:39:01.659491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.123 [2024-11-19 11:39:01.659496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.123 [2024-11-19 11:39:01.659511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.123 qpair failed and we were unable to recover it.
00:27:48.123 [2024-11-19 11:39:01.669459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.669527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.123 [2024-11-19 11:39:01.669542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.123 [2024-11-19 11:39:01.669549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.123 [2024-11-19 11:39:01.669555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.123 [2024-11-19 11:39:01.669570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.123 qpair failed and we were unable to recover it.
00:27:48.123 [2024-11-19 11:39:01.679527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.679631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.123 [2024-11-19 11:39:01.679645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.123 [2024-11-19 11:39:01.679652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.123 [2024-11-19 11:39:01.679658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.123 [2024-11-19 11:39:01.679673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.123 qpair failed and we were unable to recover it.
00:27:48.123 [2024-11-19 11:39:01.689476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.689532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.123 [2024-11-19 11:39:01.689545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.123 [2024-11-19 11:39:01.689553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.123 [2024-11-19 11:39:01.689559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.123 [2024-11-19 11:39:01.689574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.123 qpair failed and we were unable to recover it.
00:27:48.123 [2024-11-19 11:39:01.699519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.699576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.123 [2024-11-19 11:39:01.699590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.123 [2024-11-19 11:39:01.699596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.123 [2024-11-19 11:39:01.699606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.123 [2024-11-19 11:39:01.699620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.123 qpair failed and we were unable to recover it.
00:27:48.123 [2024-11-19 11:39:01.709555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.709611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.123 [2024-11-19 11:39:01.709626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.123 [2024-11-19 11:39:01.709632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.123 [2024-11-19 11:39:01.709638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.123 [2024-11-19 11:39:01.709652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.123 qpair failed and we were unable to recover it.
00:27:48.123 [2024-11-19 11:39:01.719525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.719586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.123 [2024-11-19 11:39:01.719601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.123 [2024-11-19 11:39:01.719608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.123 [2024-11-19 11:39:01.719614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.123 [2024-11-19 11:39:01.719628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.123 qpair failed and we were unable to recover it.
00:27:48.123 [2024-11-19 11:39:01.729540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.729600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.123 [2024-11-19 11:39:01.729614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.123 [2024-11-19 11:39:01.729620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.123 [2024-11-19 11:39:01.729626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.123 [2024-11-19 11:39:01.729640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.123 qpair failed and we were unable to recover it.
00:27:48.123 [2024-11-19 11:39:01.739668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.739727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.123 [2024-11-19 11:39:01.739742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.123 [2024-11-19 11:39:01.739749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.123 [2024-11-19 11:39:01.739755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.123 [2024-11-19 11:39:01.739769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.123 qpair failed and we were unable to recover it.
00:27:48.123 [2024-11-19 11:39:01.749615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.749674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.123 [2024-11-19 11:39:01.749691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.123 [2024-11-19 11:39:01.749698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.123 [2024-11-19 11:39:01.749706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.123 [2024-11-19 11:39:01.749721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.123 qpair failed and we were unable to recover it.
00:27:48.123 [2024-11-19 11:39:01.759633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.123 [2024-11-19 11:39:01.759686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.124 [2024-11-19 11:39:01.759701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.124 [2024-11-19 11:39:01.759708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.124 [2024-11-19 11:39:01.759714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.124 [2024-11-19 11:39:01.759729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.124 qpair failed and we were unable to recover it.
00:27:48.124 [2024-11-19 11:39:01.769732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.124 [2024-11-19 11:39:01.769791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.124 [2024-11-19 11:39:01.769805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.124 [2024-11-19 11:39:01.769811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.124 [2024-11-19 11:39:01.769817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.124 [2024-11-19 11:39:01.769832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.124 qpair failed and we were unable to recover it.
00:27:48.124 [2024-11-19 11:39:01.779678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.124 [2024-11-19 11:39:01.779735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.124 [2024-11-19 11:39:01.779750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.124 [2024-11-19 11:39:01.779757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.124 [2024-11-19 11:39:01.779763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.124 [2024-11-19 11:39:01.779778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.124 qpair failed and we were unable to recover it.
00:27:48.124 [2024-11-19 11:39:01.789792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.124 [2024-11-19 11:39:01.789846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.124 [2024-11-19 11:39:01.789863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.124 [2024-11-19 11:39:01.789870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.124 [2024-11-19 11:39:01.789876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.124 [2024-11-19 11:39:01.789890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.124 qpair failed and we were unable to recover it.
00:27:48.124 [2024-11-19 11:39:01.799868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.124 [2024-11-19 11:39:01.799968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.124 [2024-11-19 11:39:01.799983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.124 [2024-11-19 11:39:01.799989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.124 [2024-11-19 11:39:01.799995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:48.124 [2024-11-19 11:39:01.800010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.124 qpair failed and we were unable to recover it.
00:27:48.124 [2024-11-19 11:39:01.809909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.124 [2024-11-19 11:39:01.809980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.124 [2024-11-19 11:39:01.809995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.124 [2024-11-19 11:39:01.810001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.124 [2024-11-19 11:39:01.810007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.124 [2024-11-19 11:39:01.810023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.124 qpair failed and we were unable to recover it. 
00:27:48.124 [2024-11-19 11:39:01.819844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.124 [2024-11-19 11:39:01.819900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.124 [2024-11-19 11:39:01.819914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.124 [2024-11-19 11:39:01.819920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.124 [2024-11-19 11:39:01.819927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.124 [2024-11-19 11:39:01.819941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.124 qpair failed and we were unable to recover it. 
00:27:48.124 [2024-11-19 11:39:01.829923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.124 [2024-11-19 11:39:01.829985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.124 [2024-11-19 11:39:01.830000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.124 [2024-11-19 11:39:01.830007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.124 [2024-11-19 11:39:01.830017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.124 [2024-11-19 11:39:01.830032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.124 qpair failed and we were unable to recover it. 
00:27:48.124 [2024-11-19 11:39:01.839862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.124 [2024-11-19 11:39:01.839916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.124 [2024-11-19 11:39:01.839930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.124 [2024-11-19 11:39:01.839936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.124 [2024-11-19 11:39:01.839943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.124 [2024-11-19 11:39:01.839963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.124 qpair failed and we were unable to recover it. 
00:27:48.124 [2024-11-19 11:39:01.849970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.124 [2024-11-19 11:39:01.850028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.124 [2024-11-19 11:39:01.850042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.124 [2024-11-19 11:39:01.850049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.124 [2024-11-19 11:39:01.850055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.124 [2024-11-19 11:39:01.850069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.124 qpair failed and we were unable to recover it. 
00:27:48.124 [2024-11-19 11:39:01.860013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.124 [2024-11-19 11:39:01.860070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.124 [2024-11-19 11:39:01.860084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.124 [2024-11-19 11:39:01.860090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.124 [2024-11-19 11:39:01.860096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.124 [2024-11-19 11:39:01.860110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.124 qpair failed and we were unable to recover it. 
00:27:48.124 [2024-11-19 11:39:01.870046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.124 [2024-11-19 11:39:01.870104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.124 [2024-11-19 11:39:01.870118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.124 [2024-11-19 11:39:01.870125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.124 [2024-11-19 11:39:01.870131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.124 [2024-11-19 11:39:01.870146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.124 qpair failed and we were unable to recover it. 
00:27:48.124 [2024-11-19 11:39:01.880044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.124 [2024-11-19 11:39:01.880101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.124 [2024-11-19 11:39:01.880116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.124 [2024-11-19 11:39:01.880122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.124 [2024-11-19 11:39:01.880128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.124 [2024-11-19 11:39:01.880143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.124 qpair failed and we were unable to recover it. 
00:27:48.124 [2024-11-19 11:39:01.890084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.124 [2024-11-19 11:39:01.890155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.125 [2024-11-19 11:39:01.890170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.125 [2024-11-19 11:39:01.890176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.125 [2024-11-19 11:39:01.890182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.125 [2024-11-19 11:39:01.890197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.125 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:01.900033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.387 [2024-11-19 11:39:01.900124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.387 [2024-11-19 11:39:01.900138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.387 [2024-11-19 11:39:01.900145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.387 [2024-11-19 11:39:01.900151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.387 [2024-11-19 11:39:01.900165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.387 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:01.910117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.387 [2024-11-19 11:39:01.910176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.387 [2024-11-19 11:39:01.910190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.387 [2024-11-19 11:39:01.910197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.387 [2024-11-19 11:39:01.910203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.387 [2024-11-19 11:39:01.910217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.387 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:01.920143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.387 [2024-11-19 11:39:01.920199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.387 [2024-11-19 11:39:01.920217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.387 [2024-11-19 11:39:01.920224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.387 [2024-11-19 11:39:01.920229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.387 [2024-11-19 11:39:01.920244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.387 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:01.930149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.387 [2024-11-19 11:39:01.930243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.387 [2024-11-19 11:39:01.930258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.387 [2024-11-19 11:39:01.930265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.387 [2024-11-19 11:39:01.930271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.387 [2024-11-19 11:39:01.930285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.387 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:01.940155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.387 [2024-11-19 11:39:01.940210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.387 [2024-11-19 11:39:01.940224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.387 [2024-11-19 11:39:01.940231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.387 [2024-11-19 11:39:01.940237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.387 [2024-11-19 11:39:01.940251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.387 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:01.950249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.387 [2024-11-19 11:39:01.950307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.387 [2024-11-19 11:39:01.950321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.387 [2024-11-19 11:39:01.950327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.387 [2024-11-19 11:39:01.950333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.387 [2024-11-19 11:39:01.950347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.387 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:01.960271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.387 [2024-11-19 11:39:01.960327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.387 [2024-11-19 11:39:01.960341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.387 [2024-11-19 11:39:01.960347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.387 [2024-11-19 11:39:01.960357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.387 [2024-11-19 11:39:01.960371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.387 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:01.970296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.387 [2024-11-19 11:39:01.970349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.387 [2024-11-19 11:39:01.970363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.387 [2024-11-19 11:39:01.970369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.387 [2024-11-19 11:39:01.970375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.387 [2024-11-19 11:39:01.970389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.387 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:01.980258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.387 [2024-11-19 11:39:01.980312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.387 [2024-11-19 11:39:01.980326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.387 [2024-11-19 11:39:01.980332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.387 [2024-11-19 11:39:01.980339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.387 [2024-11-19 11:39:01.980353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.387 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:01.990286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.387 [2024-11-19 11:39:01.990342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.387 [2024-11-19 11:39:01.990356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.387 [2024-11-19 11:39:01.990363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.387 [2024-11-19 11:39:01.990369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.387 [2024-11-19 11:39:01.990383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.387 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:02.000392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.387 [2024-11-19 11:39:02.000446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.387 [2024-11-19 11:39:02.000460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.387 [2024-11-19 11:39:02.000467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.387 [2024-11-19 11:39:02.000473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.387 [2024-11-19 11:39:02.000490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.387 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:02.010419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.387 [2024-11-19 11:39:02.010474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.387 [2024-11-19 11:39:02.010489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.387 [2024-11-19 11:39:02.010496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.387 [2024-11-19 11:39:02.010502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.387 [2024-11-19 11:39:02.010516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.387 qpair failed and we were unable to recover it. 
00:27:48.387 [2024-11-19 11:39:02.020475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.388 [2024-11-19 11:39:02.020527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.388 [2024-11-19 11:39:02.020541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.388 [2024-11-19 11:39:02.020547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.388 [2024-11-19 11:39:02.020554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.388 [2024-11-19 11:39:02.020568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.388 qpair failed and we were unable to recover it. 
00:27:48.388 [2024-11-19 11:39:02.030532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.388 [2024-11-19 11:39:02.030589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.388 [2024-11-19 11:39:02.030603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.388 [2024-11-19 11:39:02.030610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.388 [2024-11-19 11:39:02.030616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.388 [2024-11-19 11:39:02.030631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.388 qpair failed and we were unable to recover it. 
00:27:48.388 [2024-11-19 11:39:02.040546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.388 [2024-11-19 11:39:02.040605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.388 [2024-11-19 11:39:02.040619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.388 [2024-11-19 11:39:02.040626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.388 [2024-11-19 11:39:02.040632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.388 [2024-11-19 11:39:02.040647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.388 qpair failed and we were unable to recover it. 
00:27:48.388 [2024-11-19 11:39:02.050543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.388 [2024-11-19 11:39:02.050600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.388 [2024-11-19 11:39:02.050618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.388 [2024-11-19 11:39:02.050625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.388 [2024-11-19 11:39:02.050631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.388 [2024-11-19 11:39:02.050646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.388 qpair failed and we were unable to recover it. 
00:27:48.388 [2024-11-19 11:39:02.060555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.388 [2024-11-19 11:39:02.060608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.388 [2024-11-19 11:39:02.060622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.388 [2024-11-19 11:39:02.060628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.388 [2024-11-19 11:39:02.060635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.388 [2024-11-19 11:39:02.060649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.388 qpair failed and we were unable to recover it. 
00:27:48.388 [2024-11-19 11:39:02.070554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.388 [2024-11-19 11:39:02.070620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.388 [2024-11-19 11:39:02.070634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.388 [2024-11-19 11:39:02.070641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.388 [2024-11-19 11:39:02.070647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.388 [2024-11-19 11:39:02.070661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.388 qpair failed and we were unable to recover it. 
00:27:48.388 [2024-11-19 11:39:02.080650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.388 [2024-11-19 11:39:02.080705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.388 [2024-11-19 11:39:02.080719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.388 [2024-11-19 11:39:02.080726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.388 [2024-11-19 11:39:02.080732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.388 [2024-11-19 11:39:02.080746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.388 qpair failed and we were unable to recover it. 
00:27:48.388 ... 00:27:48.652 [the same CONNECT failure sequence (Unknown controller ID 0x1, Connect command failed rc -5, sct 1 sc 130, failed tqpair=0xadaba0, CQ transport error -6 on qpair id 3, "qpair failed and we were unable to recover it.") repeated 34 more times at ~10 ms intervals, 2024-11-19 11:39:02.090669 through 11:39:02.421668]
00:27:48.914 [2024-11-19 11:39:02.431667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.914 [2024-11-19 11:39:02.431763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.914 [2024-11-19 11:39:02.431778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.914 [2024-11-19 11:39:02.431784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.914 [2024-11-19 11:39:02.431791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.914 [2024-11-19 11:39:02.431805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.914 qpair failed and we were unable to recover it. 
00:27:48.914 [2024-11-19 11:39:02.441660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.914 [2024-11-19 11:39:02.441722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.914 [2024-11-19 11:39:02.441740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.914 [2024-11-19 11:39:02.441746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.914 [2024-11-19 11:39:02.441752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.914 [2024-11-19 11:39:02.441766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.914 qpair failed and we were unable to recover it. 
00:27:48.914 [2024-11-19 11:39:02.451678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.914 [2024-11-19 11:39:02.451733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.914 [2024-11-19 11:39:02.451747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.914 [2024-11-19 11:39:02.451754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.914 [2024-11-19 11:39:02.451760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.914 [2024-11-19 11:39:02.451775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.914 qpair failed and we were unable to recover it. 
00:27:48.914 [2024-11-19 11:39:02.461720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.914 [2024-11-19 11:39:02.461777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.914 [2024-11-19 11:39:02.461792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.914 [2024-11-19 11:39:02.461799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.914 [2024-11-19 11:39:02.461805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.914 [2024-11-19 11:39:02.461819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.914 qpair failed and we were unable to recover it. 
00:27:48.914 [2024-11-19 11:39:02.471694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.914 [2024-11-19 11:39:02.471752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.914 [2024-11-19 11:39:02.471767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.914 [2024-11-19 11:39:02.471774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.914 [2024-11-19 11:39:02.471780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.914 [2024-11-19 11:39:02.471794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.914 qpair failed and we were unable to recover it. 
00:27:48.914 [2024-11-19 11:39:02.481752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.914 [2024-11-19 11:39:02.481811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.914 [2024-11-19 11:39:02.481826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.914 [2024-11-19 11:39:02.481832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.914 [2024-11-19 11:39:02.481842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.914 [2024-11-19 11:39:02.481856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.914 qpair failed and we were unable to recover it. 
00:27:48.914 [2024-11-19 11:39:02.491781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.914 [2024-11-19 11:39:02.491837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.914 [2024-11-19 11:39:02.491851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.914 [2024-11-19 11:39:02.491858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.914 [2024-11-19 11:39:02.491864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.914 [2024-11-19 11:39:02.491878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.914 qpair failed and we were unable to recover it. 
00:27:48.914 [2024-11-19 11:39:02.501801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.914 [2024-11-19 11:39:02.501859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.914 [2024-11-19 11:39:02.501874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.914 [2024-11-19 11:39:02.501881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.914 [2024-11-19 11:39:02.501887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.914 [2024-11-19 11:39:02.501901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.914 qpair failed and we were unable to recover it. 
00:27:48.914 [2024-11-19 11:39:02.511846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.914 [2024-11-19 11:39:02.511901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.914 [2024-11-19 11:39:02.511915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.914 [2024-11-19 11:39:02.511921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.914 [2024-11-19 11:39:02.511928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.914 [2024-11-19 11:39:02.511942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.914 qpair failed and we were unable to recover it. 
00:27:48.914 [2024-11-19 11:39:02.521882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.914 [2024-11-19 11:39:02.521941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.914 [2024-11-19 11:39:02.521962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.914 [2024-11-19 11:39:02.521970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.914 [2024-11-19 11:39:02.521976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.914 [2024-11-19 11:39:02.521992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.914 qpair failed and we were unable to recover it. 
00:27:48.914 [2024-11-19 11:39:02.531891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.914 [2024-11-19 11:39:02.531951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.531967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.531974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.531980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.531995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.541972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.542069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.542084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.542090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.542096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.542111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.551910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.552001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.552015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.552022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.552028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.552042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.561981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.562036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.562050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.562057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.562063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.562077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.571995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.572047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.572064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.572070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.572077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.572091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.582020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.582081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.582095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.582102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.582107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.582122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.592037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.592107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.592122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.592129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.592135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.592149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.602131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.602229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.602243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.602249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.602255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.602270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.612105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.612163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.612177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.612184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.612193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.612208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.622132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.622185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.622199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.622205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.622212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.622226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.632174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.632232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.632249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.632256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.632262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.632277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.642207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.642284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.642298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.642304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.642310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.642325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.652195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.652253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.652268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.915 [2024-11-19 11:39:02.652274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.915 [2024-11-19 11:39:02.652280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.915 [2024-11-19 11:39:02.652295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.915 qpair failed and we were unable to recover it. 
00:27:48.915 [2024-11-19 11:39:02.662268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.915 [2024-11-19 11:39:02.662326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.915 [2024-11-19 11:39:02.662340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.916 [2024-11-19 11:39:02.662347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.916 [2024-11-19 11:39:02.662353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.916 [2024-11-19 11:39:02.662368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.916 qpair failed and we were unable to recover it. 
00:27:48.916 [2024-11-19 11:39:02.672310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.916 [2024-11-19 11:39:02.672379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.916 [2024-11-19 11:39:02.672393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.916 [2024-11-19 11:39:02.672400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.916 [2024-11-19 11:39:02.672406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.916 [2024-11-19 11:39:02.672420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.916 qpair failed and we were unable to recover it. 
00:27:48.916 [2024-11-19 11:39:02.682331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.916 [2024-11-19 11:39:02.682395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.916 [2024-11-19 11:39:02.682410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.916 [2024-11-19 11:39:02.682416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.916 [2024-11-19 11:39:02.682422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:48.916 [2024-11-19 11:39:02.682435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.916 qpair failed and we were unable to recover it. 
00:27:49.178 [2024-11-19 11:39:02.692338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.178 [2024-11-19 11:39:02.692392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.178 [2024-11-19 11:39:02.692406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.178 [2024-11-19 11:39:02.692413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.178 [2024-11-19 11:39:02.692419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.178 [2024-11-19 11:39:02.692433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.178 qpair failed and we were unable to recover it. 
00:27:49.178 [2024-11-19 11:39:02.702391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.178 [2024-11-19 11:39:02.702457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.178 [2024-11-19 11:39:02.702475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.178 [2024-11-19 11:39:02.702482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.178 [2024-11-19 11:39:02.702488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.178 [2024-11-19 11:39:02.702503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.178 qpair failed and we were unable to recover it.
00:27:49.178 [2024-11-19 11:39:02.712407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.178 [2024-11-19 11:39:02.712465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.178 [2024-11-19 11:39:02.712479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.178 [2024-11-19 11:39:02.712486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.178 [2024-11-19 11:39:02.712492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.178 [2024-11-19 11:39:02.712506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.178 qpair failed and we were unable to recover it.
00:27:49.178 [2024-11-19 11:39:02.722430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.178 [2024-11-19 11:39:02.722487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.178 [2024-11-19 11:39:02.722501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.178 [2024-11-19 11:39:02.722508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.178 [2024-11-19 11:39:02.722514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.178 [2024-11-19 11:39:02.722529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.178 qpair failed and we were unable to recover it.
00:27:49.178 [2024-11-19 11:39:02.732481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.178 [2024-11-19 11:39:02.732564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.178 [2024-11-19 11:39:02.732579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.178 [2024-11-19 11:39:02.732585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.178 [2024-11-19 11:39:02.732591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.178 [2024-11-19 11:39:02.732605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.178 qpair failed and we were unable to recover it.
00:27:49.178 [2024-11-19 11:39:02.742423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.178 [2024-11-19 11:39:02.742477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.178 [2024-11-19 11:39:02.742492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.178 [2024-11-19 11:39:02.742498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.178 [2024-11-19 11:39:02.742508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.178 [2024-11-19 11:39:02.742523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.178 qpair failed and we were unable to recover it.
00:27:49.178 [2024-11-19 11:39:02.752502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.178 [2024-11-19 11:39:02.752594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.178 [2024-11-19 11:39:02.752608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.178 [2024-11-19 11:39:02.752615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.178 [2024-11-19 11:39:02.752620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.178 [2024-11-19 11:39:02.752635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.178 qpair failed and we were unable to recover it.
00:27:49.178 [2024-11-19 11:39:02.762561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.178 [2024-11-19 11:39:02.762614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.178 [2024-11-19 11:39:02.762628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.178 [2024-11-19 11:39:02.762634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.178 [2024-11-19 11:39:02.762640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.178 [2024-11-19 11:39:02.762654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.178 qpair failed and we were unable to recover it.
00:27:49.178 [2024-11-19 11:39:02.772607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.178 [2024-11-19 11:39:02.772661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.178 [2024-11-19 11:39:02.772675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.178 [2024-11-19 11:39:02.772682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.178 [2024-11-19 11:39:02.772688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.178 [2024-11-19 11:39:02.772702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.178 qpair failed and we were unable to recover it.
00:27:49.178 [2024-11-19 11:39:02.782572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.178 [2024-11-19 11:39:02.782637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.782651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.782657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.782663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.782677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.792643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.792702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.792715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.792722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.792727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.792741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.802628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.802721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.802735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.802742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.802747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.802762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.812638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.812729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.812742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.812749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.812755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.812769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.822714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.822765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.822779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.822785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.822791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.822805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.832803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.832874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.832893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.832899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.832905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.832920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.842814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.842873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.842887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.842894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.842900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.842914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.852794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.852868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.852882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.852889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.852895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.852909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.862833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.862914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.862928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.862935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.862941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.862960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.872793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.872850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.872864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.872871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.872880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.872894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.882899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.882956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.882970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.882977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.882983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.882998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.892935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.892993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.893007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.893014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.893019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.893034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.902938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.902993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.179 [2024-11-19 11:39:02.903007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.179 [2024-11-19 11:39:02.903014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.179 [2024-11-19 11:39:02.903020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.179 [2024-11-19 11:39:02.903034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.179 qpair failed and we were unable to recover it.
00:27:49.179 [2024-11-19 11:39:02.912988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.179 [2024-11-19 11:39:02.913079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.180 [2024-11-19 11:39:02.913093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.180 [2024-11-19 11:39:02.913099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.180 [2024-11-19 11:39:02.913106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.180 [2024-11-19 11:39:02.913120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.180 qpair failed and we were unable to recover it.
00:27:49.180 [2024-11-19 11:39:02.923051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.180 [2024-11-19 11:39:02.923104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.180 [2024-11-19 11:39:02.923118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.180 [2024-11-19 11:39:02.923124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.180 [2024-11-19 11:39:02.923131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.180 [2024-11-19 11:39:02.923145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.180 qpair failed and we were unable to recover it.
00:27:49.180 [2024-11-19 11:39:02.933031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.180 [2024-11-19 11:39:02.933086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.180 [2024-11-19 11:39:02.933099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.180 [2024-11-19 11:39:02.933106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.180 [2024-11-19 11:39:02.933112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.180 [2024-11-19 11:39:02.933126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.180 qpair failed and we were unable to recover it.
00:27:49.180 [2024-11-19 11:39:02.943059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.180 [2024-11-19 11:39:02.943113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.180 [2024-11-19 11:39:02.943126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.180 [2024-11-19 11:39:02.943133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.180 [2024-11-19 11:39:02.943139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.180 [2024-11-19 11:39:02.943153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.180 qpair failed and we were unable to recover it.
00:27:49.180 [2024-11-19 11:39:02.953093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.180 [2024-11-19 11:39:02.953151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.180 [2024-11-19 11:39:02.953164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.180 [2024-11-19 11:39:02.953171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.180 [2024-11-19 11:39:02.953177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.180 [2024-11-19 11:39:02.953192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.180 qpair failed and we were unable to recover it.
00:27:49.441 [2024-11-19 11:39:02.963117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.441 [2024-11-19 11:39:02.963219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.441 [2024-11-19 11:39:02.963240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.441 [2024-11-19 11:39:02.963247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.441 [2024-11-19 11:39:02.963253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.441 [2024-11-19 11:39:02.963267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.441 qpair failed and we were unable to recover it.
00:27:49.441 [2024-11-19 11:39:02.973152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.441 [2024-11-19 11:39:02.973207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.441 [2024-11-19 11:39:02.973221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.441 [2024-11-19 11:39:02.973227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.441 [2024-11-19 11:39:02.973233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.441 [2024-11-19 11:39:02.973248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.441 qpair failed and we were unable to recover it.
00:27:49.441 [2024-11-19 11:39:02.983187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.441 [2024-11-19 11:39:02.983245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.442 [2024-11-19 11:39:02.983260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.442 [2024-11-19 11:39:02.983266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.442 [2024-11-19 11:39:02.983272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.442 [2024-11-19 11:39:02.983286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.442 qpair failed and we were unable to recover it.
00:27:49.442 [2024-11-19 11:39:02.993219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.442 [2024-11-19 11:39:02.993295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.442 [2024-11-19 11:39:02.993309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.442 [2024-11-19 11:39:02.993316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.442 [2024-11-19 11:39:02.993322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.442 [2024-11-19 11:39:02.993336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.442 qpair failed and we were unable to recover it.
00:27:49.442 [2024-11-19 11:39:03.003240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.442 [2024-11-19 11:39:03.003298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.442 [2024-11-19 11:39:03.003313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.442 [2024-11-19 11:39:03.003320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.442 [2024-11-19 11:39:03.003329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.442 [2024-11-19 11:39:03.003344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.442 qpair failed and we were unable to recover it.
00:27:49.442 [2024-11-19 11:39:03.013243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.442 [2024-11-19 11:39:03.013297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.442 [2024-11-19 11:39:03.013311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.442 [2024-11-19 11:39:03.013318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.442 [2024-11-19 11:39:03.013324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.442 [2024-11-19 11:39:03.013338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.442 qpair failed and we were unable to recover it.
00:27:49.442 [2024-11-19 11:39:03.023307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.442 [2024-11-19 11:39:03.023367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.442 [2024-11-19 11:39:03.023381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.442 [2024-11-19 11:39:03.023388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.442 [2024-11-19 11:39:03.023394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.442 [2024-11-19 11:39:03.023408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.442 qpair failed and we were unable to recover it.
00:27:49.442 [2024-11-19 11:39:03.033334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.442 [2024-11-19 11:39:03.033390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.442 [2024-11-19 11:39:03.033404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.442 [2024-11-19 11:39:03.033410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.442 [2024-11-19 11:39:03.033416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.442 [2024-11-19 11:39:03.033431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.442 qpair failed and we were unable to recover it.
00:27:49.442 [2024-11-19 11:39:03.043340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.442 [2024-11-19 11:39:03.043393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.442 [2024-11-19 11:39:03.043407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.442 [2024-11-19 11:39:03.043414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.442 [2024-11-19 11:39:03.043419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.442 [2024-11-19 11:39:03.043434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.442 qpair failed and we were unable to recover it.
00:27:49.442 [2024-11-19 11:39:03.053372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.442 [2024-11-19 11:39:03.053427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.442 [2024-11-19 11:39:03.053441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.442 [2024-11-19 11:39:03.053448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.442 [2024-11-19 11:39:03.053454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.442 [2024-11-19 11:39:03.053468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.442 qpair failed and we were unable to recover it. 
00:27:49.442 [2024-11-19 11:39:03.063437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.442 [2024-11-19 11:39:03.063489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.442 [2024-11-19 11:39:03.063503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.442 [2024-11-19 11:39:03.063509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.442 [2024-11-19 11:39:03.063515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.442 [2024-11-19 11:39:03.063530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.442 qpair failed and we were unable to recover it. 
00:27:49.442 [2024-11-19 11:39:03.073437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.442 [2024-11-19 11:39:03.073494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.442 [2024-11-19 11:39:03.073507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.442 [2024-11-19 11:39:03.073514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.442 [2024-11-19 11:39:03.073520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.442 [2024-11-19 11:39:03.073534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.442 qpair failed and we were unable to recover it. 
00:27:49.442 [2024-11-19 11:39:03.083466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.442 [2024-11-19 11:39:03.083520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.442 [2024-11-19 11:39:03.083534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.442 [2024-11-19 11:39:03.083540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.442 [2024-11-19 11:39:03.083547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.442 [2024-11-19 11:39:03.083561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.442 qpair failed and we were unable to recover it. 
00:27:49.442 [2024-11-19 11:39:03.093543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.442 [2024-11-19 11:39:03.093619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.442 [2024-11-19 11:39:03.093636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.442 [2024-11-19 11:39:03.093643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.442 [2024-11-19 11:39:03.093649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.442 [2024-11-19 11:39:03.093663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.442 qpair failed and we were unable to recover it. 
00:27:49.442 [2024-11-19 11:39:03.103515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.442 [2024-11-19 11:39:03.103567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.442 [2024-11-19 11:39:03.103581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.442 [2024-11-19 11:39:03.103588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.442 [2024-11-19 11:39:03.103594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.442 [2024-11-19 11:39:03.103609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.442 qpair failed and we were unable to recover it. 
00:27:49.442 [2024-11-19 11:39:03.113613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.442 [2024-11-19 11:39:03.113670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.442 [2024-11-19 11:39:03.113684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.443 [2024-11-19 11:39:03.113691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.443 [2024-11-19 11:39:03.113696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.443 [2024-11-19 11:39:03.113711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.443 qpair failed and we were unable to recover it. 
00:27:49.443 [2024-11-19 11:39:03.123622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.443 [2024-11-19 11:39:03.123674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.443 [2024-11-19 11:39:03.123687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.443 [2024-11-19 11:39:03.123694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.443 [2024-11-19 11:39:03.123700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.443 [2024-11-19 11:39:03.123713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.443 qpair failed and we were unable to recover it. 
00:27:49.443 [2024-11-19 11:39:03.133588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.443 [2024-11-19 11:39:03.133690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.443 [2024-11-19 11:39:03.133704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.443 [2024-11-19 11:39:03.133710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.443 [2024-11-19 11:39:03.133720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.443 [2024-11-19 11:39:03.133734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.443 qpair failed and we were unable to recover it. 
00:27:49.443 [2024-11-19 11:39:03.143679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.443 [2024-11-19 11:39:03.143729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.443 [2024-11-19 11:39:03.143744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.443 [2024-11-19 11:39:03.143750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.443 [2024-11-19 11:39:03.143756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.443 [2024-11-19 11:39:03.143771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.443 qpair failed and we were unable to recover it. 
00:27:49.443 [2024-11-19 11:39:03.153702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.443 [2024-11-19 11:39:03.153760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.443 [2024-11-19 11:39:03.153775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.443 [2024-11-19 11:39:03.153781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.443 [2024-11-19 11:39:03.153788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.443 [2024-11-19 11:39:03.153802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.443 qpair failed and we were unable to recover it. 
00:27:49.443 [2024-11-19 11:39:03.163715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.443 [2024-11-19 11:39:03.163771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.443 [2024-11-19 11:39:03.163785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.443 [2024-11-19 11:39:03.163792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.443 [2024-11-19 11:39:03.163798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.443 [2024-11-19 11:39:03.163812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.443 qpair failed and we were unable to recover it. 
00:27:49.443 [2024-11-19 11:39:03.173651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.443 [2024-11-19 11:39:03.173706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.443 [2024-11-19 11:39:03.173720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.443 [2024-11-19 11:39:03.173727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.443 [2024-11-19 11:39:03.173732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.443 [2024-11-19 11:39:03.173746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.443 qpair failed and we were unable to recover it. 
00:27:49.443 [2024-11-19 11:39:03.183780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.443 [2024-11-19 11:39:03.183836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.443 [2024-11-19 11:39:03.183851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.443 [2024-11-19 11:39:03.183858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.443 [2024-11-19 11:39:03.183864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.443 [2024-11-19 11:39:03.183879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.443 qpair failed and we were unable to recover it. 
00:27:49.443 [2024-11-19 11:39:03.193806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.443 [2024-11-19 11:39:03.193864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.443 [2024-11-19 11:39:03.193879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.443 [2024-11-19 11:39:03.193886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.443 [2024-11-19 11:39:03.193892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.443 [2024-11-19 11:39:03.193906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.443 qpair failed and we were unable to recover it. 
00:27:49.443 [2024-11-19 11:39:03.203818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.443 [2024-11-19 11:39:03.203876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.443 [2024-11-19 11:39:03.203890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.443 [2024-11-19 11:39:03.203897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.443 [2024-11-19 11:39:03.203903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.443 [2024-11-19 11:39:03.203918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.443 qpair failed and we were unable to recover it. 
00:27:49.443 [2024-11-19 11:39:03.213857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.443 [2024-11-19 11:39:03.213911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.443 [2024-11-19 11:39:03.213925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.443 [2024-11-19 11:39:03.213932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.443 [2024-11-19 11:39:03.213938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.443 [2024-11-19 11:39:03.213959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.443 qpair failed and we were unable to recover it. 
00:27:49.704 [2024-11-19 11:39:03.223821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.704 [2024-11-19 11:39:03.223876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.704 [2024-11-19 11:39:03.223893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.704 [2024-11-19 11:39:03.223900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.704 [2024-11-19 11:39:03.223906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.704 [2024-11-19 11:39:03.223921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.704 qpair failed and we were unable to recover it. 
00:27:49.704 [2024-11-19 11:39:03.233839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.704 [2024-11-19 11:39:03.233897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.704 [2024-11-19 11:39:03.233911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.704 [2024-11-19 11:39:03.233918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.704 [2024-11-19 11:39:03.233924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.704 [2024-11-19 11:39:03.233938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.704 qpair failed and we were unable to recover it. 
00:27:49.704 [2024-11-19 11:39:03.243934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.704 [2024-11-19 11:39:03.244004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.704 [2024-11-19 11:39:03.244019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.704 [2024-11-19 11:39:03.244026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.704 [2024-11-19 11:39:03.244032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.704 [2024-11-19 11:39:03.244046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.704 qpair failed and we were unable to recover it. 
00:27:49.704 [2024-11-19 11:39:03.253919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.704 [2024-11-19 11:39:03.253981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.704 [2024-11-19 11:39:03.253996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.704 [2024-11-19 11:39:03.254003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.704 [2024-11-19 11:39:03.254009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.704 [2024-11-19 11:39:03.254024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.704 qpair failed and we were unable to recover it. 
00:27:49.704 [2024-11-19 11:39:03.263962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.704 [2024-11-19 11:39:03.264021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.704 [2024-11-19 11:39:03.264035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.704 [2024-11-19 11:39:03.264042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.704 [2024-11-19 11:39:03.264051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.704 [2024-11-19 11:39:03.264067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.704 qpair failed and we were unable to recover it. 
00:27:49.704 [2024-11-19 11:39:03.273998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.704 [2024-11-19 11:39:03.274055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.705 [2024-11-19 11:39:03.274069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.705 [2024-11-19 11:39:03.274076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.705 [2024-11-19 11:39:03.274082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.705 [2024-11-19 11:39:03.274097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.705 qpair failed and we were unable to recover it. 
00:27:49.705 [2024-11-19 11:39:03.284023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.705 [2024-11-19 11:39:03.284088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.705 [2024-11-19 11:39:03.284102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.705 [2024-11-19 11:39:03.284108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.705 [2024-11-19 11:39:03.284114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.705 [2024-11-19 11:39:03.284129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.705 qpair failed and we were unable to recover it. 
00:27:49.705 [2024-11-19 11:39:03.294077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.705 [2024-11-19 11:39:03.294148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.705 [2024-11-19 11:39:03.294163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.705 [2024-11-19 11:39:03.294169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.705 [2024-11-19 11:39:03.294176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.705 [2024-11-19 11:39:03.294190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.705 qpair failed and we were unable to recover it. 
00:27:49.705 [2024-11-19 11:39:03.304098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.705 [2024-11-19 11:39:03.304156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.705 [2024-11-19 11:39:03.304170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.705 [2024-11-19 11:39:03.304176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.705 [2024-11-19 11:39:03.304182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.705 [2024-11-19 11:39:03.304197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.705 qpair failed and we were unable to recover it. 
00:27:49.705 [2024-11-19 11:39:03.314139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.705 [2024-11-19 11:39:03.314196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.705 [2024-11-19 11:39:03.314210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.705 [2024-11-19 11:39:03.314217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.705 [2024-11-19 11:39:03.314223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.705 [2024-11-19 11:39:03.314237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.705 qpair failed and we were unable to recover it. 
00:27:49.705 [2024-11-19 11:39:03.324152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.705 [2024-11-19 11:39:03.324237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.705 [2024-11-19 11:39:03.324251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.705 [2024-11-19 11:39:03.324257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.705 [2024-11-19 11:39:03.324263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.705 [2024-11-19 11:39:03.324277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.705 qpair failed and we were unable to recover it.
00:27:49.705 [2024-11-19 11:39:03.334237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.705 [2024-11-19 11:39:03.334293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.705 [2024-11-19 11:39:03.334307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.705 [2024-11-19 11:39:03.334314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.705 [2024-11-19 11:39:03.334320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.705 [2024-11-19 11:39:03.334334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.705 qpair failed and we were unable to recover it.
00:27:49.705 [2024-11-19 11:39:03.344250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.705 [2024-11-19 11:39:03.344301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.705 [2024-11-19 11:39:03.344315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.705 [2024-11-19 11:39:03.344322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.705 [2024-11-19 11:39:03.344328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.705 [2024-11-19 11:39:03.344342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.705 qpair failed and we were unable to recover it.
00:27:49.705 [2024-11-19 11:39:03.354278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.705 [2024-11-19 11:39:03.354338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.705 [2024-11-19 11:39:03.354358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.705 [2024-11-19 11:39:03.354365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.705 [2024-11-19 11:39:03.354371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.705 [2024-11-19 11:39:03.354386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.705 qpair failed and we were unable to recover it.
00:27:49.705 [2024-11-19 11:39:03.364266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.705 [2024-11-19 11:39:03.364320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.705 [2024-11-19 11:39:03.364334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.705 [2024-11-19 11:39:03.364340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.705 [2024-11-19 11:39:03.364346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.705 [2024-11-19 11:39:03.364360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.705 qpair failed and we were unable to recover it.
00:27:49.705 [2024-11-19 11:39:03.374333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.705 [2024-11-19 11:39:03.374383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.705 [2024-11-19 11:39:03.374397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.705 [2024-11-19 11:39:03.374404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.705 [2024-11-19 11:39:03.374409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.705 [2024-11-19 11:39:03.374424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.705 qpair failed and we were unable to recover it.
00:27:49.705 [2024-11-19 11:39:03.384328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.705 [2024-11-19 11:39:03.384380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.705 [2024-11-19 11:39:03.384394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.705 [2024-11-19 11:39:03.384400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.705 [2024-11-19 11:39:03.384406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.705 [2024-11-19 11:39:03.384420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.705 qpair failed and we were unable to recover it.
00:27:49.705 [2024-11-19 11:39:03.394402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.705 [2024-11-19 11:39:03.394465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.705 [2024-11-19 11:39:03.394478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.705 [2024-11-19 11:39:03.394485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.705 [2024-11-19 11:39:03.394494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.705 [2024-11-19 11:39:03.394508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.705 qpair failed and we were unable to recover it.
00:27:49.705 [2024-11-19 11:39:03.404428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.705 [2024-11-19 11:39:03.404479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.705 [2024-11-19 11:39:03.404493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.706 [2024-11-19 11:39:03.404499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.706 [2024-11-19 11:39:03.404505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.706 [2024-11-19 11:39:03.404519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.706 qpair failed and we were unable to recover it.
00:27:49.706 [2024-11-19 11:39:03.414395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.706 [2024-11-19 11:39:03.414449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.706 [2024-11-19 11:39:03.414462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.706 [2024-11-19 11:39:03.414469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.706 [2024-11-19 11:39:03.414475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.706 [2024-11-19 11:39:03.414490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.706 qpair failed and we were unable to recover it.
00:27:49.706 [2024-11-19 11:39:03.424474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.706 [2024-11-19 11:39:03.424527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.706 [2024-11-19 11:39:03.424541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.706 [2024-11-19 11:39:03.424547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.706 [2024-11-19 11:39:03.424554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.706 [2024-11-19 11:39:03.424568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.706 qpair failed and we were unable to recover it.
00:27:49.706 [2024-11-19 11:39:03.434403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.706 [2024-11-19 11:39:03.434458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.706 [2024-11-19 11:39:03.434472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.706 [2024-11-19 11:39:03.434479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.706 [2024-11-19 11:39:03.434484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.706 [2024-11-19 11:39:03.434499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.706 qpair failed and we were unable to recover it.
00:27:49.706 [2024-11-19 11:39:03.444453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.706 [2024-11-19 11:39:03.444510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.706 [2024-11-19 11:39:03.444524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.706 [2024-11-19 11:39:03.444530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.706 [2024-11-19 11:39:03.444536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.706 [2024-11-19 11:39:03.444550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.706 qpair failed and we were unable to recover it.
00:27:49.706 [2024-11-19 11:39:03.454576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.706 [2024-11-19 11:39:03.454633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.706 [2024-11-19 11:39:03.454647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.706 [2024-11-19 11:39:03.454654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.706 [2024-11-19 11:39:03.454660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.706 [2024-11-19 11:39:03.454675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.706 qpair failed and we were unable to recover it.
00:27:49.706 [2024-11-19 11:39:03.464484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.706 [2024-11-19 11:39:03.464568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.706 [2024-11-19 11:39:03.464582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.706 [2024-11-19 11:39:03.464589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.706 [2024-11-19 11:39:03.464595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.706 [2024-11-19 11:39:03.464610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.706 qpair failed and we were unable to recover it.
00:27:49.706 [2024-11-19 11:39:03.474539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.706 [2024-11-19 11:39:03.474594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.706 [2024-11-19 11:39:03.474609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.706 [2024-11-19 11:39:03.474615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.706 [2024-11-19 11:39:03.474621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.706 [2024-11-19 11:39:03.474635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.706 qpair failed and we were unable to recover it.
00:27:49.967 [2024-11-19 11:39:03.484745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.967 [2024-11-19 11:39:03.484810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.967 [2024-11-19 11:39:03.484827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.967 [2024-11-19 11:39:03.484834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.967 [2024-11-19 11:39:03.484840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.967 [2024-11-19 11:39:03.484854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.967 qpair failed and we were unable to recover it.
00:27:49.967 [2024-11-19 11:39:03.494635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.967 [2024-11-19 11:39:03.494687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.967 [2024-11-19 11:39:03.494701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.967 [2024-11-19 11:39:03.494708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.967 [2024-11-19 11:39:03.494714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.967 [2024-11-19 11:39:03.494729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.967 qpair failed and we were unable to recover it.
00:27:49.967 [2024-11-19 11:39:03.504687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.967 [2024-11-19 11:39:03.504759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.967 [2024-11-19 11:39:03.504773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.967 [2024-11-19 11:39:03.504779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.967 [2024-11-19 11:39:03.504786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.967 [2024-11-19 11:39:03.504800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.967 qpair failed and we were unable to recover it.
00:27:49.967 [2024-11-19 11:39:03.514674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.967 [2024-11-19 11:39:03.514733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.967 [2024-11-19 11:39:03.514747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.967 [2024-11-19 11:39:03.514754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.967 [2024-11-19 11:39:03.514760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.967 [2024-11-19 11:39:03.514774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.967 qpair failed and we were unable to recover it.
00:27:49.967 [2024-11-19 11:39:03.524660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.967 [2024-11-19 11:39:03.524716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.967 [2024-11-19 11:39:03.524730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.967 [2024-11-19 11:39:03.524737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.967 [2024-11-19 11:39:03.524746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.967 [2024-11-19 11:39:03.524761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.967 qpair failed and we were unable to recover it.
00:27:49.967 [2024-11-19 11:39:03.534800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.967 [2024-11-19 11:39:03.534859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.967 [2024-11-19 11:39:03.534880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.967 [2024-11-19 11:39:03.534886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.967 [2024-11-19 11:39:03.534893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.967 [2024-11-19 11:39:03.534908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.967 qpair failed and we were unable to recover it.
00:27:49.967 [2024-11-19 11:39:03.544727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.967 [2024-11-19 11:39:03.544783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.967 [2024-11-19 11:39:03.544797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.967 [2024-11-19 11:39:03.544804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.967 [2024-11-19 11:39:03.544810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.967 [2024-11-19 11:39:03.544825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.967 qpair failed and we were unable to recover it.
00:27:49.967 [2024-11-19 11:39:03.554809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.967 [2024-11-19 11:39:03.554884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.967 [2024-11-19 11:39:03.554898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.967 [2024-11-19 11:39:03.554905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.967 [2024-11-19 11:39:03.554911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.967 [2024-11-19 11:39:03.554925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.967 qpair failed and we were unable to recover it.
00:27:49.967 [2024-11-19 11:39:03.564800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.967 [2024-11-19 11:39:03.564858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.968 [2024-11-19 11:39:03.564872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.968 [2024-11-19 11:39:03.564879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.968 [2024-11-19 11:39:03.564884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.968 [2024-11-19 11:39:03.564899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.968 qpair failed and we were unable to recover it.
00:27:49.968 [2024-11-19 11:39:03.574912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.968 [2024-11-19 11:39:03.574977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.968 [2024-11-19 11:39:03.574992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.968 [2024-11-19 11:39:03.574998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.968 [2024-11-19 11:39:03.575004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.968 [2024-11-19 11:39:03.575019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.968 qpair failed and we were unable to recover it.
00:27:49.968 [2024-11-19 11:39:03.584896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.968 [2024-11-19 11:39:03.584953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.968 [2024-11-19 11:39:03.584967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.968 [2024-11-19 11:39:03.584974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.968 [2024-11-19 11:39:03.584980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.968 [2024-11-19 11:39:03.584994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.968 qpair failed and we were unable to recover it.
00:27:49.968 [2024-11-19 11:39:03.594942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.968 [2024-11-19 11:39:03.595006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.968 [2024-11-19 11:39:03.595020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.968 [2024-11-19 11:39:03.595026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.968 [2024-11-19 11:39:03.595032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.968 [2024-11-19 11:39:03.595047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.968 qpair failed and we were unable to recover it.
00:27:49.968 [2024-11-19 11:39:03.604960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.968 [2024-11-19 11:39:03.605052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.968 [2024-11-19 11:39:03.605067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.968 [2024-11-19 11:39:03.605073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.968 [2024-11-19 11:39:03.605079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.968 [2024-11-19 11:39:03.605094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.968 qpair failed and we were unable to recover it.
00:27:49.968 [2024-11-19 11:39:03.614923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.968 [2024-11-19 11:39:03.614983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.968 [2024-11-19 11:39:03.615003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.968 [2024-11-19 11:39:03.615010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.968 [2024-11-19 11:39:03.615016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.968 [2024-11-19 11:39:03.615031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.968 qpair failed and we were unable to recover it.
00:27:49.968 [2024-11-19 11:39:03.625030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.968 [2024-11-19 11:39:03.625082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.968 [2024-11-19 11:39:03.625097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.968 [2024-11-19 11:39:03.625103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.968 [2024-11-19 11:39:03.625109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.968 [2024-11-19 11:39:03.625123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.968 qpair failed and we were unable to recover it.
00:27:49.968 [2024-11-19 11:39:03.635097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.968 [2024-11-19 11:39:03.635157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.968 [2024-11-19 11:39:03.635173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.968 [2024-11-19 11:39:03.635181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.968 [2024-11-19 11:39:03.635187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.968 [2024-11-19 11:39:03.635203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.968 qpair failed and we were unable to recover it.
00:27:49.968 [2024-11-19 11:39:03.645150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.968 [2024-11-19 11:39:03.645214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.968 [2024-11-19 11:39:03.645228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.968 [2024-11-19 11:39:03.645235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.968 [2024-11-19 11:39:03.645241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.968 [2024-11-19 11:39:03.645256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.968 qpair failed and we were unable to recover it.
00:27:49.968 [2024-11-19 11:39:03.655118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.968 [2024-11-19 11:39:03.655173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.968 [2024-11-19 11:39:03.655187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.968 [2024-11-19 11:39:03.655194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.968 [2024-11-19 11:39:03.655204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.968 [2024-11-19 11:39:03.655218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.968 qpair failed and we were unable to recover it.
00:27:49.968 [2024-11-19 11:39:03.665131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.968 [2024-11-19 11:39:03.665184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.968 [2024-11-19 11:39:03.665198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.968 [2024-11-19 11:39:03.665204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.968 [2024-11-19 11:39:03.665210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:49.968 [2024-11-19 11:39:03.665224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:49.968 qpair failed and we were unable to recover it.
00:27:49.968 [2024-11-19 11:39:03.675172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.968 [2024-11-19 11:39:03.675227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.968 [2024-11-19 11:39:03.675242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.968 [2024-11-19 11:39:03.675248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.968 [2024-11-19 11:39:03.675254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.968 [2024-11-19 11:39:03.675268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.968 qpair failed and we were unable to recover it. 
00:27:49.968 [2024-11-19 11:39:03.685238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.968 [2024-11-19 11:39:03.685295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.968 [2024-11-19 11:39:03.685309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.968 [2024-11-19 11:39:03.685316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.968 [2024-11-19 11:39:03.685322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.968 [2024-11-19 11:39:03.685336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.968 qpair failed and we were unable to recover it. 
00:27:49.968 [2024-11-19 11:39:03.695209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.968 [2024-11-19 11:39:03.695264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.969 [2024-11-19 11:39:03.695277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.969 [2024-11-19 11:39:03.695284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.969 [2024-11-19 11:39:03.695290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.969 [2024-11-19 11:39:03.695303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.969 qpair failed and we were unable to recover it. 
00:27:49.969 [2024-11-19 11:39:03.705257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.969 [2024-11-19 11:39:03.705310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.969 [2024-11-19 11:39:03.705323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.969 [2024-11-19 11:39:03.705330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.969 [2024-11-19 11:39:03.705336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.969 [2024-11-19 11:39:03.705350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.969 qpair failed and we were unable to recover it. 
00:27:49.969 [2024-11-19 11:39:03.715304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.969 [2024-11-19 11:39:03.715359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.969 [2024-11-19 11:39:03.715373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.969 [2024-11-19 11:39:03.715380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.969 [2024-11-19 11:39:03.715386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.969 [2024-11-19 11:39:03.715400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.969 qpair failed and we were unable to recover it. 
00:27:49.969 [2024-11-19 11:39:03.725327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.969 [2024-11-19 11:39:03.725383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.969 [2024-11-19 11:39:03.725397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.969 [2024-11-19 11:39:03.725404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.969 [2024-11-19 11:39:03.725410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.969 [2024-11-19 11:39:03.725424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.969 qpair failed and we were unable to recover it. 
00:27:49.969 [2024-11-19 11:39:03.735312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.969 [2024-11-19 11:39:03.735374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.969 [2024-11-19 11:39:03.735389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.969 [2024-11-19 11:39:03.735396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.969 [2024-11-19 11:39:03.735402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:49.969 [2024-11-19 11:39:03.735416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:49.969 qpair failed and we were unable to recover it. 
00:27:50.230 [2024-11-19 11:39:03.745372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.230 [2024-11-19 11:39:03.745425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.230 [2024-11-19 11:39:03.745442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.230 [2024-11-19 11:39:03.745449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.230 [2024-11-19 11:39:03.745455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.230 [2024-11-19 11:39:03.745470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.230 qpair failed and we were unable to recover it. 
00:27:50.230 [2024-11-19 11:39:03.755404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.230 [2024-11-19 11:39:03.755500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.230 [2024-11-19 11:39:03.755514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.230 [2024-11-19 11:39:03.755520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.230 [2024-11-19 11:39:03.755526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.230 [2024-11-19 11:39:03.755540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.230 qpair failed and we were unable to recover it. 
00:27:50.230 [2024-11-19 11:39:03.765438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.230 [2024-11-19 11:39:03.765500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.230 [2024-11-19 11:39:03.765515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.230 [2024-11-19 11:39:03.765522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.230 [2024-11-19 11:39:03.765527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.230 [2024-11-19 11:39:03.765542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.230 qpair failed and we were unable to recover it. 
00:27:50.230 [2024-11-19 11:39:03.775461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.230 [2024-11-19 11:39:03.775547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.230 [2024-11-19 11:39:03.775561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.230 [2024-11-19 11:39:03.775567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.230 [2024-11-19 11:39:03.775573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.230 [2024-11-19 11:39:03.775587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.230 qpair failed and we were unable to recover it. 
00:27:50.230 [2024-11-19 11:39:03.785486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.230 [2024-11-19 11:39:03.785541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.230 [2024-11-19 11:39:03.785555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.230 [2024-11-19 11:39:03.785561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.230 [2024-11-19 11:39:03.785571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.230 [2024-11-19 11:39:03.785585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.230 qpair failed and we were unable to recover it. 
00:27:50.230 [2024-11-19 11:39:03.795524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.230 [2024-11-19 11:39:03.795582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.230 [2024-11-19 11:39:03.795596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.230 [2024-11-19 11:39:03.795604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.230 [2024-11-19 11:39:03.795610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.230 [2024-11-19 11:39:03.795624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.230 qpair failed and we were unable to recover it. 
00:27:50.230 [2024-11-19 11:39:03.805557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.230 [2024-11-19 11:39:03.805614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.230 [2024-11-19 11:39:03.805629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.230 [2024-11-19 11:39:03.805635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.230 [2024-11-19 11:39:03.805641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.230 [2024-11-19 11:39:03.805655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.230 qpair failed and we were unable to recover it. 
00:27:50.230 [2024-11-19 11:39:03.815579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.230 [2024-11-19 11:39:03.815642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.230 [2024-11-19 11:39:03.815655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.230 [2024-11-19 11:39:03.815662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.230 [2024-11-19 11:39:03.815668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.230 [2024-11-19 11:39:03.815682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.230 qpair failed and we were unable to recover it. 
00:27:50.230 [2024-11-19 11:39:03.825634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.230 [2024-11-19 11:39:03.825698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.230 [2024-11-19 11:39:03.825712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.230 [2024-11-19 11:39:03.825718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.230 [2024-11-19 11:39:03.825725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.230 [2024-11-19 11:39:03.825739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.230 qpair failed and we were unable to recover it. 
00:27:50.230 [2024-11-19 11:39:03.835639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.230 [2024-11-19 11:39:03.835695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.230 [2024-11-19 11:39:03.835709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.230 [2024-11-19 11:39:03.835716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.230 [2024-11-19 11:39:03.835722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.231 [2024-11-19 11:39:03.835737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.231 qpair failed and we were unable to recover it. 
00:27:50.231 [2024-11-19 11:39:03.845708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.231 [2024-11-19 11:39:03.845767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.231 [2024-11-19 11:39:03.845781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.231 [2024-11-19 11:39:03.845788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.231 [2024-11-19 11:39:03.845793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.231 [2024-11-19 11:39:03.845807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.231 qpair failed and we were unable to recover it. 
00:27:50.231 [2024-11-19 11:39:03.855689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.231 [2024-11-19 11:39:03.855744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.231 [2024-11-19 11:39:03.855758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.231 [2024-11-19 11:39:03.855764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.231 [2024-11-19 11:39:03.855770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.231 [2024-11-19 11:39:03.855785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.231 qpair failed and we were unable to recover it. 
00:27:50.231 [2024-11-19 11:39:03.865727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.231 [2024-11-19 11:39:03.865780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.231 [2024-11-19 11:39:03.865794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.231 [2024-11-19 11:39:03.865800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.231 [2024-11-19 11:39:03.865806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.231 [2024-11-19 11:39:03.865820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.231 qpair failed and we were unable to recover it. 
00:27:50.231 [2024-11-19 11:39:03.875720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.231 [2024-11-19 11:39:03.875780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.231 [2024-11-19 11:39:03.875798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.231 [2024-11-19 11:39:03.875805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.231 [2024-11-19 11:39:03.875811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.231 [2024-11-19 11:39:03.875825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.231 qpair failed and we were unable to recover it. 
00:27:50.231 [2024-11-19 11:39:03.885773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.231 [2024-11-19 11:39:03.885824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.231 [2024-11-19 11:39:03.885839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.231 [2024-11-19 11:39:03.885845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.231 [2024-11-19 11:39:03.885851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.231 [2024-11-19 11:39:03.885865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.231 qpair failed and we were unable to recover it. 
00:27:50.231 [2024-11-19 11:39:03.895840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.231 [2024-11-19 11:39:03.895893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.231 [2024-11-19 11:39:03.895907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.231 [2024-11-19 11:39:03.895914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.231 [2024-11-19 11:39:03.895920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.231 [2024-11-19 11:39:03.895935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.231 qpair failed and we were unable to recover it. 
00:27:50.231 [2024-11-19 11:39:03.905881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.231 [2024-11-19 11:39:03.905936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.231 [2024-11-19 11:39:03.905954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.231 [2024-11-19 11:39:03.905961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.231 [2024-11-19 11:39:03.905967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.231 [2024-11-19 11:39:03.905982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.231 qpair failed and we were unable to recover it. 
00:27:50.231 [2024-11-19 11:39:03.915887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.231 [2024-11-19 11:39:03.915943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.231 [2024-11-19 11:39:03.915961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.231 [2024-11-19 11:39:03.915968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.231 [2024-11-19 11:39:03.915977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.231 [2024-11-19 11:39:03.915992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.231 qpair failed and we were unable to recover it. 
00:27:50.231 [2024-11-19 11:39:03.925900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.231 [2024-11-19 11:39:03.925957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.231 [2024-11-19 11:39:03.925972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.231 [2024-11-19 11:39:03.925978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.231 [2024-11-19 11:39:03.925984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.231 [2024-11-19 11:39:03.925999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.231 qpair failed and we were unable to recover it. 
00:27:50.231 [2024-11-19 11:39:03.935927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.231 [2024-11-19 11:39:03.935987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.231 [2024-11-19 11:39:03.936000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.231 [2024-11-19 11:39:03.936007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.231 [2024-11-19 11:39:03.936013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.231 [2024-11-19 11:39:03.936027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.231 qpair failed and we were unable to recover it. 
00:27:50.231 [2024-11-19 11:39:03.945955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.231 [2024-11-19 11:39:03.946005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.231 [2024-11-19 11:39:03.946019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.231 [2024-11-19 11:39:03.946026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.231 [2024-11-19 11:39:03.946031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.231 [2024-11-19 11:39:03.946046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.231 qpair failed and we were unable to recover it.
00:27:50.231 [2024-11-19 11:39:03.956007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.231 [2024-11-19 11:39:03.956063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.231 [2024-11-19 11:39:03.956077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.231 [2024-11-19 11:39:03.956083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.231 [2024-11-19 11:39:03.956089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.231 [2024-11-19 11:39:03.956104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.231 qpair failed and we were unable to recover it.
00:27:50.231 [2024-11-19 11:39:03.966020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.231 [2024-11-19 11:39:03.966080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.231 [2024-11-19 11:39:03.966094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.231 [2024-11-19 11:39:03.966100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.231 [2024-11-19 11:39:03.966106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.232 [2024-11-19 11:39:03.966121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.232 qpair failed and we were unable to recover it.
00:27:50.232 [2024-11-19 11:39:03.976010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.232 [2024-11-19 11:39:03.976076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.232 [2024-11-19 11:39:03.976090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.232 [2024-11-19 11:39:03.976096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.232 [2024-11-19 11:39:03.976102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.232 [2024-11-19 11:39:03.976117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.232 qpair failed and we were unable to recover it.
00:27:50.232 [2024-11-19 11:39:03.986096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.232 [2024-11-19 11:39:03.986169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.232 [2024-11-19 11:39:03.986184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.232 [2024-11-19 11:39:03.986190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.232 [2024-11-19 11:39:03.986196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.232 [2024-11-19 11:39:03.986210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.232 qpair failed and we were unable to recover it.
00:27:50.232 [2024-11-19 11:39:03.996111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.232 [2024-11-19 11:39:03.996170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.232 [2024-11-19 11:39:03.996183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.232 [2024-11-19 11:39:03.996190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.232 [2024-11-19 11:39:03.996196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.232 [2024-11-19 11:39:03.996210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.232 qpair failed and we were unable to recover it.
00:27:50.232 [2024-11-19 11:39:04.006167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.232 [2024-11-19 11:39:04.006226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.232 [2024-11-19 11:39:04.006243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.232 [2024-11-19 11:39:04.006249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.232 [2024-11-19 11:39:04.006255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.232 [2024-11-19 11:39:04.006269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.232 qpair failed and we were unable to recover it.
00:27:50.493 [2024-11-19 11:39:04.016174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.493 [2024-11-19 11:39:04.016250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.493 [2024-11-19 11:39:04.016264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.493 [2024-11-19 11:39:04.016272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.493 [2024-11-19 11:39:04.016278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.493 [2024-11-19 11:39:04.016292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.493 qpair failed and we were unable to recover it.
00:27:50.493 [2024-11-19 11:39:04.026142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.493 [2024-11-19 11:39:04.026229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.493 [2024-11-19 11:39:04.026244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.493 [2024-11-19 11:39:04.026250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.493 [2024-11-19 11:39:04.026256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.493 [2024-11-19 11:39:04.026270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.493 qpair failed and we were unable to recover it.
00:27:50.493 [2024-11-19 11:39:04.036238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.493 [2024-11-19 11:39:04.036295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.493 [2024-11-19 11:39:04.036309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.493 [2024-11-19 11:39:04.036316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.493 [2024-11-19 11:39:04.036322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.493 [2024-11-19 11:39:04.036336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.493 qpair failed and we were unable to recover it.
00:27:50.493 [2024-11-19 11:39:04.046259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.493 [2024-11-19 11:39:04.046339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.493 [2024-11-19 11:39:04.046353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.493 [2024-11-19 11:39:04.046360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.493 [2024-11-19 11:39:04.046366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.493 [2024-11-19 11:39:04.046384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.493 qpair failed and we were unable to recover it.
00:27:50.493 [2024-11-19 11:39:04.056291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.493 [2024-11-19 11:39:04.056347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.493 [2024-11-19 11:39:04.056361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.493 [2024-11-19 11:39:04.056368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.493 [2024-11-19 11:39:04.056374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.493 [2024-11-19 11:39:04.056388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.493 qpair failed and we were unable to recover it.
00:27:50.493 [2024-11-19 11:39:04.066313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.493 [2024-11-19 11:39:04.066402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.493 [2024-11-19 11:39:04.066416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.493 [2024-11-19 11:39:04.066423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.493 [2024-11-19 11:39:04.066429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.493 [2024-11-19 11:39:04.066443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.493 qpair failed and we were unable to recover it.
00:27:50.493 [2024-11-19 11:39:04.076316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.493 [2024-11-19 11:39:04.076408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.493 [2024-11-19 11:39:04.076421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.493 [2024-11-19 11:39:04.076428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.493 [2024-11-19 11:39:04.076434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.493 [2024-11-19 11:39:04.076448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.493 qpair failed and we were unable to recover it.
00:27:50.493 [2024-11-19 11:39:04.086386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.493 [2024-11-19 11:39:04.086443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.086457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.086463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.086469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.086483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.096405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.096456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.096470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.096476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.096482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.096496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.106368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.106423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.106437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.106443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.106449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.106463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.116394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.116500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.116515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.116521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.116527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.116541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.126492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.126551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.126565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.126571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.126577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.126591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.136521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.136571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.136588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.136594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.136600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.136615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.146539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.146595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.146608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.146615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.146621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.146635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.156620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.156719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.156733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.156739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.156745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.156760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.166615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.166678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.166692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.166699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.166704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.166719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.176668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.176722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.176736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.176742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.176748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.176766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.186626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.186684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.186697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.186704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.186710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.186724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.196684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.196753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.196767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.196774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.196779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.196794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.206696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.206798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.206811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.494 [2024-11-19 11:39:04.206818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.494 [2024-11-19 11:39:04.206824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.494 [2024-11-19 11:39:04.206838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.494 qpair failed and we were unable to recover it.
00:27:50.494 [2024-11-19 11:39:04.216738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.494 [2024-11-19 11:39:04.216836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.494 [2024-11-19 11:39:04.216850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.495 [2024-11-19 11:39:04.216857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.495 [2024-11-19 11:39:04.216862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.495 [2024-11-19 11:39:04.216877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.495 qpair failed and we were unable to recover it.
00:27:50.495 [2024-11-19 11:39:04.226801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.495 [2024-11-19 11:39:04.226862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.495 [2024-11-19 11:39:04.226876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.495 [2024-11-19 11:39:04.226883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.495 [2024-11-19 11:39:04.226889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.495 [2024-11-19 11:39:04.226904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.495 qpair failed and we were unable to recover it.
00:27:50.495 [2024-11-19 11:39:04.236798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.495 [2024-11-19 11:39:04.236854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.495 [2024-11-19 11:39:04.236869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.495 [2024-11-19 11:39:04.236875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.495 [2024-11-19 11:39:04.236881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.495 [2024-11-19 11:39:04.236896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.495 qpair failed and we were unable to recover it.
00:27:50.495 [2024-11-19 11:39:04.246828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.495 [2024-11-19 11:39:04.246887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.495 [2024-11-19 11:39:04.246900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.495 [2024-11-19 11:39:04.246907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.495 [2024-11-19 11:39:04.246913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.495 [2024-11-19 11:39:04.246927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.495 qpair failed and we were unable to recover it.
00:27:50.495 [2024-11-19 11:39:04.256852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.495 [2024-11-19 11:39:04.256905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.495 [2024-11-19 11:39:04.256919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.495 [2024-11-19 11:39:04.256926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.495 [2024-11-19 11:39:04.256932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.495 [2024-11-19 11:39:04.256950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.495 qpair failed and we were unable to recover it.
00:27:50.495 [2024-11-19 11:39:04.266951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.495 [2024-11-19 11:39:04.267005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.495 [2024-11-19 11:39:04.267026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.495 [2024-11-19 11:39:04.267032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.495 [2024-11-19 11:39:04.267038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.495 [2024-11-19 11:39:04.267052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.495 qpair failed and we were unable to recover it.
00:27:50.756 [2024-11-19 11:39:04.276908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.756 [2024-11-19 11:39:04.276970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.756 [2024-11-19 11:39:04.276984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.756 [2024-11-19 11:39:04.276991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.756 [2024-11-19 11:39:04.276998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.756 [2024-11-19 11:39:04.277012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.756 qpair failed and we were unable to recover it.
00:27:50.756 [2024-11-19 11:39:04.286954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.756 [2024-11-19 11:39:04.287009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.756 [2024-11-19 11:39:04.287022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.756 [2024-11-19 11:39:04.287028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.756 [2024-11-19 11:39:04.287034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:50.756 [2024-11-19 11:39:04.287049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:50.756 qpair failed and we were unable to recover it.
00:27:50.756 [2024-11-19 11:39:04.297005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.756 [2024-11-19 11:39:04.297101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.756 [2024-11-19 11:39:04.297117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.756 [2024-11-19 11:39:04.297124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.756 [2024-11-19 11:39:04.297130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.756 [2024-11-19 11:39:04.297145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.756 qpair failed and we were unable to recover it. 
00:27:50.756 [2024-11-19 11:39:04.307023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.756 [2024-11-19 11:39:04.307085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.756 [2024-11-19 11:39:04.307099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.756 [2024-11-19 11:39:04.307106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.756 [2024-11-19 11:39:04.307112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.756 [2024-11-19 11:39:04.307129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.756 qpair failed and we were unable to recover it. 
00:27:50.756 [2024-11-19 11:39:04.317045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.756 [2024-11-19 11:39:04.317114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.756 [2024-11-19 11:39:04.317128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.756 [2024-11-19 11:39:04.317134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.756 [2024-11-19 11:39:04.317141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.756 [2024-11-19 11:39:04.317155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.756 qpair failed and we were unable to recover it. 
00:27:50.756 [2024-11-19 11:39:04.327081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.756 [2024-11-19 11:39:04.327184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.327199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.327205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.327211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.327225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.337099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.337152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.337166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.337172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.337178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.337192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.347062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.347116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.347130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.347136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.347142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.347156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.357158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.357217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.357232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.357239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.357245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.357259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.367181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.367249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.367263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.367269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.367275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.367289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.377210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.377266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.377281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.377287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.377293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.377308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.387233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.387289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.387303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.387310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.387316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.387330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.397269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.397325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.397342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.397349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.397355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.397370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.407327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.407432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.407446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.407452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.407458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.407473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.417319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.417403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.417416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.417423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.417429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.417442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.427341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.427411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.427426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.427433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.427438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.427453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.437409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.437466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.437480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.437487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.437493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.437511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.447436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.447487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.757 [2024-11-19 11:39:04.447501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.757 [2024-11-19 11:39:04.447507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.757 [2024-11-19 11:39:04.447513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.757 [2024-11-19 11:39:04.447527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.757 qpair failed and we were unable to recover it. 
00:27:50.757 [2024-11-19 11:39:04.457425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.757 [2024-11-19 11:39:04.457474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.758 [2024-11-19 11:39:04.457489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.758 [2024-11-19 11:39:04.457496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.758 [2024-11-19 11:39:04.457501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.758 [2024-11-19 11:39:04.457516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.758 qpair failed and we were unable to recover it. 
00:27:50.758 [2024-11-19 11:39:04.467455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.758 [2024-11-19 11:39:04.467508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.758 [2024-11-19 11:39:04.467522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.758 [2024-11-19 11:39:04.467529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.758 [2024-11-19 11:39:04.467535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.758 [2024-11-19 11:39:04.467549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.758 qpair failed and we were unable to recover it. 
00:27:50.758 [2024-11-19 11:39:04.477490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.758 [2024-11-19 11:39:04.477548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.758 [2024-11-19 11:39:04.477562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.758 [2024-11-19 11:39:04.477568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.758 [2024-11-19 11:39:04.477574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.758 [2024-11-19 11:39:04.477589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.758 qpair failed and we were unable to recover it. 
00:27:50.758 [2024-11-19 11:39:04.487518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.758 [2024-11-19 11:39:04.487573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.758 [2024-11-19 11:39:04.487588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.758 [2024-11-19 11:39:04.487594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.758 [2024-11-19 11:39:04.487600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.758 [2024-11-19 11:39:04.487614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.758 qpair failed and we were unable to recover it. 
00:27:50.758 [2024-11-19 11:39:04.497547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.758 [2024-11-19 11:39:04.497601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.758 [2024-11-19 11:39:04.497614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.758 [2024-11-19 11:39:04.497620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.758 [2024-11-19 11:39:04.497626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.758 [2024-11-19 11:39:04.497641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.758 qpair failed and we were unable to recover it. 
00:27:50.758 [2024-11-19 11:39:04.507616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.758 [2024-11-19 11:39:04.507673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.758 [2024-11-19 11:39:04.507686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.758 [2024-11-19 11:39:04.507693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.758 [2024-11-19 11:39:04.507698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.758 [2024-11-19 11:39:04.507713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.758 qpair failed and we were unable to recover it. 
00:27:50.758 [2024-11-19 11:39:04.517600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.758 [2024-11-19 11:39:04.517655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.758 [2024-11-19 11:39:04.517670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.758 [2024-11-19 11:39:04.517676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.758 [2024-11-19 11:39:04.517682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.758 [2024-11-19 11:39:04.517696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.758 qpair failed and we were unable to recover it. 
00:27:50.758 [2024-11-19 11:39:04.527678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.758 [2024-11-19 11:39:04.527732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.758 [2024-11-19 11:39:04.527749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.758 [2024-11-19 11:39:04.527755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.758 [2024-11-19 11:39:04.527761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:50.758 [2024-11-19 11:39:04.527775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.758 qpair failed and we were unable to recover it. 
00:27:51.020 [2024-11-19 11:39:04.537619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.020 [2024-11-19 11:39:04.537678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.020 [2024-11-19 11:39:04.537692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.020 [2024-11-19 11:39:04.537699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.020 [2024-11-19 11:39:04.537704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.020 [2024-11-19 11:39:04.537719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.020 qpair failed and we were unable to recover it. 
00:27:51.020 [2024-11-19 11:39:04.547694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.020 [2024-11-19 11:39:04.547749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.020 [2024-11-19 11:39:04.547763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.020 [2024-11-19 11:39:04.547769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.020 [2024-11-19 11:39:04.547775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.020 [2024-11-19 11:39:04.547789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.020 qpair failed and we were unable to recover it. 
00:27:51.020 [2024-11-19 11:39:04.557742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.020 [2024-11-19 11:39:04.557801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.020 [2024-11-19 11:39:04.557816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.020 [2024-11-19 11:39:04.557822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.020 [2024-11-19 11:39:04.557828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.020 [2024-11-19 11:39:04.557843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.020 qpair failed and we were unable to recover it. 
00:27:51.020 [2024-11-19 11:39:04.567743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.020 [2024-11-19 11:39:04.567796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.020 [2024-11-19 11:39:04.567810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.020 [2024-11-19 11:39:04.567816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.020 [2024-11-19 11:39:04.567822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.020 [2024-11-19 11:39:04.567839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.020 qpair failed and we were unable to recover it. 
[The same CONNECT failure sequence (Unknown controller ID 0x1, Connect command failed rc -5, sct 1 sc 130, Failed to connect tqpair=0xadaba0, CQ transport error -6 on qpair id 3, "qpair failed and we were unable to recover it.") repeated 34 more times at ~10 ms intervals, from 11:39:04.577 through 11:39:04.908 (console time 00:27:51.020 to 00:27:51.284).]
00:27:51.284 [2024-11-19 11:39:04.918759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.284 [2024-11-19 11:39:04.918815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.284 [2024-11-19 11:39:04.918837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.284 [2024-11-19 11:39:04.918846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.284 [2024-11-19 11:39:04.918852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.284 [2024-11-19 11:39:04.918867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.284 qpair failed and we were unable to recover it. 
00:27:51.284 [2024-11-19 11:39:04.928799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:04.928858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:04.928872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:04.928879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:04.928885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:04.928899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:04.938817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:04.938875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:04.938890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:04.938896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:04.938903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:04.938918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:04.948796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:04.948851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:04.948865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:04.948872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:04.948878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:04.948892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:04.958805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:04.958862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:04.958877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:04.958884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:04.958890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:04.958908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:04.968885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:04.968966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:04.968981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:04.968987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:04.968993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:04.969007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:04.978906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:04.978967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:04.978981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:04.978988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:04.978994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:04.979008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:04.988902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:04.988995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:04.989008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:04.989015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:04.989020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:04.989034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:04.998986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:04.999041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:04.999055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:04.999062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:04.999067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:04.999082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:05.009022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:05.009089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:05.009103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:05.009110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:05.009116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:05.009131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:05.019070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:05.019124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:05.019138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:05.019145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:05.019151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:05.019165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:05.029082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:05.029141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:05.029155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:05.029162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:05.029168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:05.029182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:05.039110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:05.039202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:05.039217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:05.039223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:05.039229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:05.039243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:05.049096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.285 [2024-11-19 11:39:05.049154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.285 [2024-11-19 11:39:05.049173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.285 [2024-11-19 11:39:05.049179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.285 [2024-11-19 11:39:05.049185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.285 [2024-11-19 11:39:05.049200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.285 qpair failed and we were unable to recover it. 
00:27:51.285 [2024-11-19 11:39:05.059158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.286 [2024-11-19 11:39:05.059212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.286 [2024-11-19 11:39:05.059226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.286 [2024-11-19 11:39:05.059233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.286 [2024-11-19 11:39:05.059239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.286 [2024-11-19 11:39:05.059253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.286 qpair failed and we were unable to recover it. 
00:27:51.546 [2024-11-19 11:39:05.069113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.546 [2024-11-19 11:39:05.069168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.546 [2024-11-19 11:39:05.069182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.546 [2024-11-19 11:39:05.069189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.546 [2024-11-19 11:39:05.069196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.546 [2024-11-19 11:39:05.069211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.546 qpair failed and we were unable to recover it. 
00:27:51.546 [2024-11-19 11:39:05.079188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.546 [2024-11-19 11:39:05.079280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.546 [2024-11-19 11:39:05.079293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.546 [2024-11-19 11:39:05.079300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.546 [2024-11-19 11:39:05.079306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.546 [2024-11-19 11:39:05.079320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.546 qpair failed and we were unable to recover it. 
00:27:51.546 [2024-11-19 11:39:05.089230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.546 [2024-11-19 11:39:05.089324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.546 [2024-11-19 11:39:05.089339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.546 [2024-11-19 11:39:05.089345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.546 [2024-11-19 11:39:05.089351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.546 [2024-11-19 11:39:05.089371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.546 qpair failed and we were unable to recover it. 
00:27:51.546 [2024-11-19 11:39:05.099327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.546 [2024-11-19 11:39:05.099410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.546 [2024-11-19 11:39:05.099424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.547 [2024-11-19 11:39:05.099430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.547 [2024-11-19 11:39:05.099436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.547 [2024-11-19 11:39:05.099450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.547 qpair failed and we were unable to recover it. 
00:27:51.547 [2024-11-19 11:39:05.109319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.547 [2024-11-19 11:39:05.109373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.547 [2024-11-19 11:39:05.109387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.547 [2024-11-19 11:39:05.109393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.547 [2024-11-19 11:39:05.109399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.547 [2024-11-19 11:39:05.109414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.547 qpair failed and we were unable to recover it. 
00:27:51.547 [2024-11-19 11:39:05.119333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.547 [2024-11-19 11:39:05.119390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.547 [2024-11-19 11:39:05.119404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.547 [2024-11-19 11:39:05.119410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.547 [2024-11-19 11:39:05.119416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.547 [2024-11-19 11:39:05.119431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.547 qpair failed and we were unable to recover it. 
00:27:51.547 [2024-11-19 11:39:05.129389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.547 [2024-11-19 11:39:05.129446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.547 [2024-11-19 11:39:05.129460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.547 [2024-11-19 11:39:05.129467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.547 [2024-11-19 11:39:05.129473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.547 [2024-11-19 11:39:05.129487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.547 qpair failed and we were unable to recover it. 
00:27:51.547 [2024-11-19 11:39:05.139391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.547 [2024-11-19 11:39:05.139443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.547 [2024-11-19 11:39:05.139456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.547 [2024-11-19 11:39:05.139462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.547 [2024-11-19 11:39:05.139469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.547 [2024-11-19 11:39:05.139483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.547 qpair failed and we were unable to recover it. 
00:27:51.547 [2024-11-19 11:39:05.149326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.547 [2024-11-19 11:39:05.149381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.547 [2024-11-19 11:39:05.149395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.547 [2024-11-19 11:39:05.149402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.547 [2024-11-19 11:39:05.149408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.547 [2024-11-19 11:39:05.149423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.547 qpair failed and we were unable to recover it. 
00:27:51.547 [2024-11-19 11:39:05.159381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.547 [2024-11-19 11:39:05.159446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.547 [2024-11-19 11:39:05.159460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.547 [2024-11-19 11:39:05.159467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.547 [2024-11-19 11:39:05.159473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.547 [2024-11-19 11:39:05.159488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.547 qpair failed and we were unable to recover it. 
00:27:51.547 [2024-11-19 11:39:05.169496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.547 [2024-11-19 11:39:05.169556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.547 [2024-11-19 11:39:05.169570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.547 [2024-11-19 11:39:05.169577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.547 [2024-11-19 11:39:05.169583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.547 [2024-11-19 11:39:05.169597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.547 qpair failed and we were unable to recover it. 
00:27:51.547 [2024-11-19 11:39:05.179479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.547 [2024-11-19 11:39:05.179565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.547 [2024-11-19 11:39:05.179583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.547 [2024-11-19 11:39:05.179590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.547 [2024-11-19 11:39:05.179596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.547 [2024-11-19 11:39:05.179610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.547 qpair failed and we were unable to recover it. 
00:27:51.547 [2024-11-19 11:39:05.189460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.547 [2024-11-19 11:39:05.189519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.547 [2024-11-19 11:39:05.189532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.547 [2024-11-19 11:39:05.189538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.547 [2024-11-19 11:39:05.189544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.547 [2024-11-19 11:39:05.189558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.547 qpair failed and we were unable to recover it.
00:27:51.547 [2024-11-19 11:39:05.199550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.547 [2024-11-19 11:39:05.199608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.547 [2024-11-19 11:39:05.199621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.547 [2024-11-19 11:39:05.199628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.547 [2024-11-19 11:39:05.199634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.547 [2024-11-19 11:39:05.199648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.547 qpair failed and we were unable to recover it.
00:27:51.547 [2024-11-19 11:39:05.209581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.547 [2024-11-19 11:39:05.209633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.547 [2024-11-19 11:39:05.209647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.547 [2024-11-19 11:39:05.209654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.547 [2024-11-19 11:39:05.209660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.547 [2024-11-19 11:39:05.209674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.547 qpair failed and we were unable to recover it.
00:27:51.547 [2024-11-19 11:39:05.219583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.547 [2024-11-19 11:39:05.219684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.547 [2024-11-19 11:39:05.219698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.547 [2024-11-19 11:39:05.219704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.547 [2024-11-19 11:39:05.219710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.547 [2024-11-19 11:39:05.219728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.547 qpair failed and we were unable to recover it.
00:27:51.547 [2024-11-19 11:39:05.229660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.548 [2024-11-19 11:39:05.229715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.548 [2024-11-19 11:39:05.229730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.548 [2024-11-19 11:39:05.229737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.548 [2024-11-19 11:39:05.229743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.548 [2024-11-19 11:39:05.229757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.548 qpair failed and we were unable to recover it.
00:27:51.548 [2024-11-19 11:39:05.239661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.548 [2024-11-19 11:39:05.239718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.548 [2024-11-19 11:39:05.239733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.548 [2024-11-19 11:39:05.239739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.548 [2024-11-19 11:39:05.239746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.548 [2024-11-19 11:39:05.239760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.548 qpair failed and we were unable to recover it.
00:27:51.548 [2024-11-19 11:39:05.249703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.548 [2024-11-19 11:39:05.249822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.548 [2024-11-19 11:39:05.249837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.548 [2024-11-19 11:39:05.249844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.548 [2024-11-19 11:39:05.249850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.548 [2024-11-19 11:39:05.249866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.548 qpair failed and we were unable to recover it.
00:27:51.548 [2024-11-19 11:39:05.259692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.548 [2024-11-19 11:39:05.259746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.548 [2024-11-19 11:39:05.259760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.548 [2024-11-19 11:39:05.259766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.548 [2024-11-19 11:39:05.259772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.548 [2024-11-19 11:39:05.259787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.548 qpair failed and we were unable to recover it.
00:27:51.548 [2024-11-19 11:39:05.269674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.548 [2024-11-19 11:39:05.269733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.548 [2024-11-19 11:39:05.269747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.548 [2024-11-19 11:39:05.269754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.548 [2024-11-19 11:39:05.269760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.548 [2024-11-19 11:39:05.269774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.548 qpair failed and we were unable to recover it.
00:27:51.548 [2024-11-19 11:39:05.279842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.548 [2024-11-19 11:39:05.279898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.548 [2024-11-19 11:39:05.279913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.548 [2024-11-19 11:39:05.279919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.548 [2024-11-19 11:39:05.279926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.548 [2024-11-19 11:39:05.279940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.548 qpair failed and we were unable to recover it.
00:27:51.548 [2024-11-19 11:39:05.289782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.548 [2024-11-19 11:39:05.289862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.548 [2024-11-19 11:39:05.289876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.548 [2024-11-19 11:39:05.289883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.548 [2024-11-19 11:39:05.289889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.548 [2024-11-19 11:39:05.289903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.548 qpair failed and we were unable to recover it.
00:27:51.548 [2024-11-19 11:39:05.299769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.548 [2024-11-19 11:39:05.299838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.548 [2024-11-19 11:39:05.299854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.548 [2024-11-19 11:39:05.299861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.548 [2024-11-19 11:39:05.299866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.548 [2024-11-19 11:39:05.299881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.548 qpair failed and we were unable to recover it.
00:27:51.548 [2024-11-19 11:39:05.309914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.548 [2024-11-19 11:39:05.310000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.548 [2024-11-19 11:39:05.310017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.548 [2024-11-19 11:39:05.310024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.548 [2024-11-19 11:39:05.310030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.548 [2024-11-19 11:39:05.310044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.548 qpair failed and we were unable to recover it.
00:27:51.548 [2024-11-19 11:39:05.319897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.548 [2024-11-19 11:39:05.319958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.548 [2024-11-19 11:39:05.319972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.548 [2024-11-19 11:39:05.319979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.548 [2024-11-19 11:39:05.319986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.548 [2024-11-19 11:39:05.320001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.548 qpair failed and we were unable to recover it.
00:27:51.809 [2024-11-19 11:39:05.329935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.809 [2024-11-19 11:39:05.329992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.809 [2024-11-19 11:39:05.330006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.809 [2024-11-19 11:39:05.330013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.809 [2024-11-19 11:39:05.330019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.809 [2024-11-19 11:39:05.330034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.809 qpair failed and we were unable to recover it.
00:27:51.809 [2024-11-19 11:39:05.339927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.809 [2024-11-19 11:39:05.339987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.809 [2024-11-19 11:39:05.340001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.809 [2024-11-19 11:39:05.340008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.809 [2024-11-19 11:39:05.340014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.809 [2024-11-19 11:39:05.340029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.809 qpair failed and we were unable to recover it.
00:27:51.809 [2024-11-19 11:39:05.349970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.809 [2024-11-19 11:39:05.350021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.809 [2024-11-19 11:39:05.350035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.809 [2024-11-19 11:39:05.350041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.809 [2024-11-19 11:39:05.350049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.809 [2024-11-19 11:39:05.350068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.809 qpair failed and we were unable to recover it.
00:27:51.809 [2024-11-19 11:39:05.360025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.809 [2024-11-19 11:39:05.360083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.360099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.360106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.360112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.360126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.370046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.370112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.370126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.370133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.370139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.370154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.380112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.380164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.380178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.380185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.380191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.380205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.390073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.390129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.390145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.390152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.390158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.390173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.400187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.400256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.400269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.400276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.400281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.400297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.410160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.410251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.410264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.410270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.410276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.410290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.420223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.420277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.420291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.420297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.420303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.420317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.430186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.430282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.430297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.430303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.430309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.430324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.440244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.440300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.440317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.440324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.440330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.440343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.450277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.450351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.450366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.450372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.450378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.450392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.460339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.460393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.460407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.460413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.460420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.460434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.470335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.470386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.470400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.470406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.470412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.470426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.480375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.480432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.810 [2024-11-19 11:39:05.480447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.810 [2024-11-19 11:39:05.480453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.810 [2024-11-19 11:39:05.480459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.810 [2024-11-19 11:39:05.480476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.810 qpair failed and we were unable to recover it.
00:27:51.810 [2024-11-19 11:39:05.490382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.810 [2024-11-19 11:39:05.490442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.811 [2024-11-19 11:39:05.490457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.811 [2024-11-19 11:39:05.490463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.811 [2024-11-19 11:39:05.490469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.811 [2024-11-19 11:39:05.490484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.811 qpair failed and we were unable to recover it.
00:27:51.811 [2024-11-19 11:39:05.500342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.811 [2024-11-19 11:39:05.500394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.811 [2024-11-19 11:39:05.500408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.811 [2024-11-19 11:39:05.500415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.811 [2024-11-19 11:39:05.500421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.811 [2024-11-19 11:39:05.500435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.811 qpair failed and we were unable to recover it.
00:27:51.811 [2024-11-19 11:39:05.510512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.811 [2024-11-19 11:39:05.510564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.811 [2024-11-19 11:39:05.510578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.811 [2024-11-19 11:39:05.510585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.811 [2024-11-19 11:39:05.510591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.811 [2024-11-19 11:39:05.510606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.811 qpair failed and we were unable to recover it.
00:27:51.811 [2024-11-19 11:39:05.520511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.811 [2024-11-19 11:39:05.520586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.811 [2024-11-19 11:39:05.520600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.811 [2024-11-19 11:39:05.520606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.811 [2024-11-19 11:39:05.520613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.811 [2024-11-19 11:39:05.520627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.811 qpair failed and we were unable to recover it.
00:27:51.811 [2024-11-19 11:39:05.530520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.811 [2024-11-19 11:39:05.530619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.811 [2024-11-19 11:39:05.530633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.811 [2024-11-19 11:39:05.530640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.811 [2024-11-19 11:39:05.530646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:51.811 [2024-11-19 11:39:05.530661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.811 qpair failed and we were unable to recover it.
00:27:51.811 [2024-11-19 11:39:05.540554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.811 [2024-11-19 11:39:05.540651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.811 [2024-11-19 11:39:05.540665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.811 [2024-11-19 11:39:05.540671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.811 [2024-11-19 11:39:05.540677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.811 [2024-11-19 11:39:05.540691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.811 qpair failed and we were unable to recover it. 
00:27:51.811 [2024-11-19 11:39:05.550566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.811 [2024-11-19 11:39:05.550618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.811 [2024-11-19 11:39:05.550632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.811 [2024-11-19 11:39:05.550638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.811 [2024-11-19 11:39:05.550644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.811 [2024-11-19 11:39:05.550659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.811 qpair failed and we were unable to recover it. 
00:27:51.811 [2024-11-19 11:39:05.560568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.811 [2024-11-19 11:39:05.560659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.811 [2024-11-19 11:39:05.560675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.811 [2024-11-19 11:39:05.560681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.811 [2024-11-19 11:39:05.560688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.811 [2024-11-19 11:39:05.560702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.811 qpair failed and we were unable to recover it. 
00:27:51.811 [2024-11-19 11:39:05.570552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.811 [2024-11-19 11:39:05.570609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.811 [2024-11-19 11:39:05.570629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.811 [2024-11-19 11:39:05.570636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.811 [2024-11-19 11:39:05.570641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.811 [2024-11-19 11:39:05.570656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.811 qpair failed and we were unable to recover it. 
00:27:51.811 [2024-11-19 11:39:05.580580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.811 [2024-11-19 11:39:05.580634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.811 [2024-11-19 11:39:05.580648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.811 [2024-11-19 11:39:05.580655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.811 [2024-11-19 11:39:05.580661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:51.811 [2024-11-19 11:39:05.580675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:51.811 qpair failed and we were unable to recover it. 
00:27:52.072 [2024-11-19 11:39:05.590631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.072 [2024-11-19 11:39:05.590685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.072 [2024-11-19 11:39:05.590700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.072 [2024-11-19 11:39:05.590707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.072 [2024-11-19 11:39:05.590714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.072 [2024-11-19 11:39:05.590728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.072 qpair failed and we were unable to recover it. 
00:27:52.072 [2024-11-19 11:39:05.600773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.072 [2024-11-19 11:39:05.600875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.072 [2024-11-19 11:39:05.600889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.072 [2024-11-19 11:39:05.600896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.072 [2024-11-19 11:39:05.600902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.072 [2024-11-19 11:39:05.600918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.610798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.610904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.610917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.610923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.610930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.610953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.620832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.620889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.620903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.620909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.620915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.620929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.630805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.630858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.630874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.630881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.630887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.630903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.640871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.640929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.640943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.640953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.640960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.640975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.650881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.650936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.650956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.650962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.650968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.650983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.660867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.660924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.660939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.660946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.660955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.660970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.670930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.671027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.671041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.671047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.671053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.671067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.681005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.681063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.681078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.681084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.681090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.681105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.691030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.691089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.691104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.691111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.691118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.691133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.700943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.701025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.701039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.701049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.701055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.701070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.711048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.711105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.711121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.711128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.711134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.711149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.721085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.721143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.721158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.721165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.721171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.721185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.073 [2024-11-19 11:39:05.731114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.073 [2024-11-19 11:39:05.731194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.073 [2024-11-19 11:39:05.731209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.073 [2024-11-19 11:39:05.731216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.073 [2024-11-19 11:39:05.731222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.073 [2024-11-19 11:39:05.731237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.073 qpair failed and we were unable to recover it. 
00:27:52.074 [2024-11-19 11:39:05.741062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.074 [2024-11-19 11:39:05.741155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.074 [2024-11-19 11:39:05.741170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.074 [2024-11-19 11:39:05.741177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.074 [2024-11-19 11:39:05.741183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.074 [2024-11-19 11:39:05.741201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.074 qpair failed and we were unable to recover it. 
00:27:52.074 [2024-11-19 11:39:05.751093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.074 [2024-11-19 11:39:05.751161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.074 [2024-11-19 11:39:05.751176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.074 [2024-11-19 11:39:05.751182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.074 [2024-11-19 11:39:05.751188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.074 [2024-11-19 11:39:05.751202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.074 qpair failed and we were unable to recover it. 
00:27:52.074 [2024-11-19 11:39:05.761210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.074 [2024-11-19 11:39:05.761265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.074 [2024-11-19 11:39:05.761279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.074 [2024-11-19 11:39:05.761286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.074 [2024-11-19 11:39:05.761292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.074 [2024-11-19 11:39:05.761308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.074 qpair failed and we were unable to recover it. 
00:27:52.074 [2024-11-19 11:39:05.771251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.074 [2024-11-19 11:39:05.771312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.074 [2024-11-19 11:39:05.771326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.074 [2024-11-19 11:39:05.771333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.074 [2024-11-19 11:39:05.771340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.074 [2024-11-19 11:39:05.771355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.074 qpair failed and we were unable to recover it. 
00:27:52.074 [2024-11-19 11:39:05.781171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.074 [2024-11-19 11:39:05.781225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.074 [2024-11-19 11:39:05.781239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.074 [2024-11-19 11:39:05.781245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.074 [2024-11-19 11:39:05.781251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.074 [2024-11-19 11:39:05.781266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.074 qpair failed and we were unable to recover it. 
00:27:52.074 [2024-11-19 11:39:05.791204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.074 [2024-11-19 11:39:05.791262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.074 [2024-11-19 11:39:05.791276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.074 [2024-11-19 11:39:05.791283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.074 [2024-11-19 11:39:05.791289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.074 [2024-11-19 11:39:05.791303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.074 qpair failed and we were unable to recover it. 
00:27:52.074 [2024-11-19 11:39:05.801242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.074 [2024-11-19 11:39:05.801297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.074 [2024-11-19 11:39:05.801311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.074 [2024-11-19 11:39:05.801317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.074 [2024-11-19 11:39:05.801323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.074 [2024-11-19 11:39:05.801337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.074 qpair failed and we were unable to recover it. 
00:27:52.074 [2024-11-19 11:39:05.811366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.074 [2024-11-19 11:39:05.811428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.074 [2024-11-19 11:39:05.811442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.074 [2024-11-19 11:39:05.811448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.074 [2024-11-19 11:39:05.811454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.074 [2024-11-19 11:39:05.811469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.074 qpair failed and we were unable to recover it. 
00:27:52.074 [2024-11-19 11:39:05.821326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.074 [2024-11-19 11:39:05.821383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.074 [2024-11-19 11:39:05.821398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.074 [2024-11-19 11:39:05.821404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.074 [2024-11-19 11:39:05.821410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.074 [2024-11-19 11:39:05.821425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.074 qpair failed and we were unable to recover it. 
00:27:52.074 [2024-11-19 11:39:05.831319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.074 [2024-11-19 11:39:05.831379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.074 [2024-11-19 11:39:05.831393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.074 [2024-11-19 11:39:05.831403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.074 [2024-11-19 11:39:05.831409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.074 [2024-11-19 11:39:05.831424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.074 qpair failed and we were unable to recover it. 
00:27:52.074 [2024-11-19 11:39:05.841445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.074 [2024-11-19 11:39:05.841536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.074 [2024-11-19 11:39:05.841550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.074 [2024-11-19 11:39:05.841556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.074 [2024-11-19 11:39:05.841562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.074 [2024-11-19 11:39:05.841576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.074 qpair failed and we were unable to recover it. 
00:27:52.336 [2024-11-19 11:39:05.851388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.336 [2024-11-19 11:39:05.851440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.336 [2024-11-19 11:39:05.851455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.336 [2024-11-19 11:39:05.851462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.336 [2024-11-19 11:39:05.851468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.336 [2024-11-19 11:39:05.851482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.336 qpair failed and we were unable to recover it. 
00:27:52.336 [2024-11-19 11:39:05.861502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.336 [2024-11-19 11:39:05.861556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.336 [2024-11-19 11:39:05.861571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.336 [2024-11-19 11:39:05.861577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.336 [2024-11-19 11:39:05.861583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.336 [2024-11-19 11:39:05.861598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.336 qpair failed and we were unable to recover it. 
00:27:52.336 [2024-11-19 11:39:05.871508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.336 [2024-11-19 11:39:05.871563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.336 [2024-11-19 11:39:05.871577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.336 [2024-11-19 11:39:05.871584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.336 [2024-11-19 11:39:05.871590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.336 [2024-11-19 11:39:05.871608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.336 qpair failed and we were unable to recover it. 
00:27:52.336 [2024-11-19 11:39:05.881552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.336 [2024-11-19 11:39:05.881610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.336 [2024-11-19 11:39:05.881624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.336 [2024-11-19 11:39:05.881631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.336 [2024-11-19 11:39:05.881636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.336 [2024-11-19 11:39:05.881651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.336 qpair failed and we were unable to recover it. 
00:27:52.336 [2024-11-19 11:39:05.891570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.336 [2024-11-19 11:39:05.891640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.336 [2024-11-19 11:39:05.891654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.336 [2024-11-19 11:39:05.891661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.336 [2024-11-19 11:39:05.891666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.336 [2024-11-19 11:39:05.891681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.336 qpair failed and we were unable to recover it. 
00:27:52.336 [2024-11-19 11:39:05.901610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.336 [2024-11-19 11:39:05.901691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.336 [2024-11-19 11:39:05.901706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.336 [2024-11-19 11:39:05.901712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.336 [2024-11-19 11:39:05.901718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.336 [2024-11-19 11:39:05.901732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.336 qpair failed and we were unable to recover it. 
00:27:52.336 [2024-11-19 11:39:05.911640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.336 [2024-11-19 11:39:05.911688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.336 [2024-11-19 11:39:05.911702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.336 [2024-11-19 11:39:05.911708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.336 [2024-11-19 11:39:05.911715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.336 [2024-11-19 11:39:05.911729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.336 qpair failed and we were unable to recover it. 
00:27:52.336 [2024-11-19 11:39:05.921659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.336 [2024-11-19 11:39:05.921715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.336 [2024-11-19 11:39:05.921730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.336 [2024-11-19 11:39:05.921736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.336 [2024-11-19 11:39:05.921742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.336 [2024-11-19 11:39:05.921756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.336 qpair failed and we were unable to recover it. 
00:27:52.336 [2024-11-19 11:39:05.931740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.336 [2024-11-19 11:39:05.931791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.336 [2024-11-19 11:39:05.931805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.336 [2024-11-19 11:39:05.931811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.336 [2024-11-19 11:39:05.931817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.336 [2024-11-19 11:39:05.931831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.336 qpair failed and we were unable to recover it. 
00:27:52.336 [2024-11-19 11:39:05.941679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.336 [2024-11-19 11:39:05.941728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.336 [2024-11-19 11:39:05.941742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:05.941748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:05.941754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:05.941768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:05.951756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:05.951846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:05.951860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:05.951867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:05.951873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:05.951887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:05.961771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:05.961828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:05.961845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:05.961854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:05.961860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:05.961877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:05.971878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:05.971934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:05.971953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:05.971960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:05.971966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:05.971981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:05.981859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:05.981914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:05.981927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:05.981935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:05.981942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:05.981961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:05.991808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:05.991889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:05.991904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:05.991910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:05.991916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:05.991930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:06.001918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:06.001983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:06.001997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:06.002004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:06.002009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:06.002027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:06.011904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:06.011964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:06.011978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:06.011985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:06.011991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:06.012005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:06.021993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:06.022046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:06.022060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:06.022067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:06.022073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:06.022088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:06.031976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:06.032032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:06.032046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:06.032052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:06.032058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:06.032073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:06.042021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:06.042082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:06.042096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:06.042103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:06.042109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:06.042123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:06.052005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:06.052066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:06.052080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:06.052086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:06.052092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:06.052106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:06.062108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:06.062166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:06.062181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.337 [2024-11-19 11:39:06.062188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.337 [2024-11-19 11:39:06.062194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.337 [2024-11-19 11:39:06.062208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.337 qpair failed and we were unable to recover it. 
00:27:52.337 [2024-11-19 11:39:06.072033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.337 [2024-11-19 11:39:06.072086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.337 [2024-11-19 11:39:06.072101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.338 [2024-11-19 11:39:06.072109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.338 [2024-11-19 11:39:06.072115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.338 [2024-11-19 11:39:06.072130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.338 qpair failed and we were unable to recover it. 
00:27:52.338 [2024-11-19 11:39:06.082061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.338 [2024-11-19 11:39:06.082121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.338 [2024-11-19 11:39:06.082135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.338 [2024-11-19 11:39:06.082142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.338 [2024-11-19 11:39:06.082148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.338 [2024-11-19 11:39:06.082162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.338 qpair failed and we were unable to recover it. 
00:27:52.338 [2024-11-19 11:39:06.092084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.338 [2024-11-19 11:39:06.092137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.338 [2024-11-19 11:39:06.092151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.338 [2024-11-19 11:39:06.092162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.338 [2024-11-19 11:39:06.092168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.338 [2024-11-19 11:39:06.092182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.338 qpair failed and we were unable to recover it. 
00:27:52.338 [2024-11-19 11:39:06.102122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.338 [2024-11-19 11:39:06.102178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.338 [2024-11-19 11:39:06.102192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.338 [2024-11-19 11:39:06.102199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.338 [2024-11-19 11:39:06.102205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.338 [2024-11-19 11:39:06.102219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.338 qpair failed and we were unable to recover it. 
00:27:52.600 [2024-11-19 11:39:06.112178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.600 [2024-11-19 11:39:06.112280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.600 [2024-11-19 11:39:06.112294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.600 [2024-11-19 11:39:06.112301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.600 [2024-11-19 11:39:06.112307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.600 [2024-11-19 11:39:06.112322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.600 qpair failed and we were unable to recover it. 
00:27:52.600 [2024-11-19 11:39:06.122248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.600 [2024-11-19 11:39:06.122303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.600 [2024-11-19 11:39:06.122317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.600 [2024-11-19 11:39:06.122324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.600 [2024-11-19 11:39:06.122330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.600 [2024-11-19 11:39:06.122349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.600 qpair failed and we were unable to recover it. 
00:27:52.600 [2024-11-19 11:39:06.132198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.600 [2024-11-19 11:39:06.132262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.600 [2024-11-19 11:39:06.132277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.600 [2024-11-19 11:39:06.132283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.600 [2024-11-19 11:39:06.132289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.600 [2024-11-19 11:39:06.132308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.600 qpair failed and we were unable to recover it. 
00:27:52.600 [2024-11-19 11:39:06.142269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.600 [2024-11-19 11:39:06.142370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.600 [2024-11-19 11:39:06.142385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.600 [2024-11-19 11:39:06.142391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.600 [2024-11-19 11:39:06.142397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.600 [2024-11-19 11:39:06.142412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.600 qpair failed and we were unable to recover it. 
00:27:52.600 [2024-11-19 11:39:06.152263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.600 [2024-11-19 11:39:06.152345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.600 [2024-11-19 11:39:06.152359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.600 [2024-11-19 11:39:06.152365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.600 [2024-11-19 11:39:06.152371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.600 [2024-11-19 11:39:06.152385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.600 qpair failed and we were unable to recover it. 
00:27:52.600 [2024-11-19 11:39:06.162312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.600 [2024-11-19 11:39:06.162375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.600 [2024-11-19 11:39:06.162389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.600 [2024-11-19 11:39:06.162396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.600 [2024-11-19 11:39:06.162402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.600 [2024-11-19 11:39:06.162418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.600 qpair failed and we were unable to recover it. 
00:27:52.600 [2024-11-19 11:39:06.172324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.600 [2024-11-19 11:39:06.172419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.600 [2024-11-19 11:39:06.172433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.600 [2024-11-19 11:39:06.172440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.600 [2024-11-19 11:39:06.172446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.600 [2024-11-19 11:39:06.172461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.600 qpair failed and we were unable to recover it. 
00:27:52.600 [2024-11-19 11:39:06.182377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.600 [2024-11-19 11:39:06.182438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.600 [2024-11-19 11:39:06.182453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.600 [2024-11-19 11:39:06.182459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.600 [2024-11-19 11:39:06.182465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.600 [2024-11-19 11:39:06.182480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.600 qpair failed and we were unable to recover it. 
00:27:52.600 [2024-11-19 11:39:06.192445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.600 [2024-11-19 11:39:06.192497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.600 [2024-11-19 11:39:06.192511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.600 [2024-11-19 11:39:06.192518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.600 [2024-11-19 11:39:06.192524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.600 [2024-11-19 11:39:06.192539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.600 qpair failed and we were unable to recover it. 
00:27:52.600 [2024-11-19 11:39:06.202407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.600 [2024-11-19 11:39:06.202467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.600 [2024-11-19 11:39:06.202482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.600 [2024-11-19 11:39:06.202488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.600 [2024-11-19 11:39:06.202494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.600 [2024-11-19 11:39:06.202509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.600 qpair failed and we were unable to recover it. 
00:27:52.600 [2024-11-19 11:39:06.212501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.600 [2024-11-19 11:39:06.212556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.600 [2024-11-19 11:39:06.212571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.600 [2024-11-19 11:39:06.212577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.600 [2024-11-19 11:39:06.212583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.600 [2024-11-19 11:39:06.212598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.600 qpair failed and we were unable to recover it. 
00:27:52.600 [2024-11-19 11:39:06.222563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.600 [2024-11-19 11:39:06.222617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.600 [2024-11-19 11:39:06.222631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.600 [2024-11-19 11:39:06.222642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.600 [2024-11-19 11:39:06.222648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.600 [2024-11-19 11:39:06.222663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.232543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.232593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.232608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.232614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.232621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.232635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.242562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.242625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.242640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.242647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.242653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.242667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.252683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.252756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.252770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.252777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.252783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.252797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.262674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.262729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.262744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.262750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.262757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.262775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.272701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.272753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.272768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.272774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.272780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.272794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.282704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.282760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.282776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.282783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.282790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.282805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.292722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.292777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.292791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.292798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.292804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.292819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.302778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.302841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.302856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.302863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.302870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.302884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.312775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.312829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.312843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.312850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.312856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.312870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.322834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.322895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.322910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.322917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.322923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.322938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.332838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.332890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.332905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.332912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.332918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.332933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.342857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.342912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.342926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.342932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.342938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.342958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.601 qpair failed and we were unable to recover it. 
00:27:52.601 [2024-11-19 11:39:06.352903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.601 [2024-11-19 11:39:06.352963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.601 [2024-11-19 11:39:06.352978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.601 [2024-11-19 11:39:06.352991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.601 [2024-11-19 11:39:06.352997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.601 [2024-11-19 11:39:06.353012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.602 qpair failed and we were unable to recover it. 
00:27:52.602 [2024-11-19 11:39:06.362940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.602 [2024-11-19 11:39:06.363005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.602 [2024-11-19 11:39:06.363021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.602 [2024-11-19 11:39:06.363028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.602 [2024-11-19 11:39:06.363034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.602 [2024-11-19 11:39:06.363049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.602 qpair failed and we were unable to recover it. 
00:27:52.602 [2024-11-19 11:39:06.372928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.602 [2024-11-19 11:39:06.372985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.602 [2024-11-19 11:39:06.372999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.602 [2024-11-19 11:39:06.373006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.602 [2024-11-19 11:39:06.373012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.602 [2024-11-19 11:39:06.373027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.602 qpair failed and we were unable to recover it. 
00:27:52.925 [2024-11-19 11:39:06.382994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.925 [2024-11-19 11:39:06.383049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.925 [2024-11-19 11:39:06.383064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.925 [2024-11-19 11:39:06.383070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.925 [2024-11-19 11:39:06.383077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.925 [2024-11-19 11:39:06.383091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.925 qpair failed and we were unable to recover it. 
00:27:52.925 [2024-11-19 11:39:06.393043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.925 [2024-11-19 11:39:06.393122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.925 [2024-11-19 11:39:06.393136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.925 [2024-11-19 11:39:06.393143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.925 [2024-11-19 11:39:06.393149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.925 [2024-11-19 11:39:06.393168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.925 qpair failed and we were unable to recover it. 
00:27:52.925 [2024-11-19 11:39:06.403097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.925 [2024-11-19 11:39:06.403193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.925 [2024-11-19 11:39:06.403208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.925 [2024-11-19 11:39:06.403215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.925 [2024-11-19 11:39:06.403220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.925 [2024-11-19 11:39:06.403234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.925 qpair failed and we were unable to recover it. 
00:27:52.925 [2024-11-19 11:39:06.413118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.925 [2024-11-19 11:39:06.413178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.925 [2024-11-19 11:39:06.413193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.925 [2024-11-19 11:39:06.413200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.925 [2024-11-19 11:39:06.413206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.925 [2024-11-19 11:39:06.413221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.925 qpair failed and we were unable to recover it. 
00:27:52.925 [2024-11-19 11:39:06.423093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.925 [2024-11-19 11:39:06.423160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.925 [2024-11-19 11:39:06.423175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.925 [2024-11-19 11:39:06.423181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.925 [2024-11-19 11:39:06.423187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.925 [2024-11-19 11:39:06.423202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.925 qpair failed and we were unable to recover it. 
00:27:52.925 [2024-11-19 11:39:06.433183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.925 [2024-11-19 11:39:06.433274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.925 [2024-11-19 11:39:06.433287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.925 [2024-11-19 11:39:06.433294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.925 [2024-11-19 11:39:06.433300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.925 [2024-11-19 11:39:06.433314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.925 qpair failed and we were unable to recover it. 
00:27:52.925 [2024-11-19 11:39:06.443256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.925 [2024-11-19 11:39:06.443317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.925 [2024-11-19 11:39:06.443332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.925 [2024-11-19 11:39:06.443339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.925 [2024-11-19 11:39:06.443345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0 00:27:52.925 [2024-11-19 11:39:06.443359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.925 qpair failed and we were unable to recover it. 
00:27:52.925 [2024-11-19 11:39:06.453149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.925 [2024-11-19 11:39:06.453203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.925 [2024-11-19 11:39:06.453217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.925 [2024-11-19 11:39:06.453223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.925 [2024-11-19 11:39:06.453229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:52.925 [2024-11-19 11:39:06.453244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:52.925 qpair failed and we were unable to recover it.
00:27:52.925 [2024-11-19 11:39:06.463218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.925 [2024-11-19 11:39:06.463312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.925 [2024-11-19 11:39:06.463326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.925 [2024-11-19 11:39:06.463333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.925 [2024-11-19 11:39:06.463338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:52.925 [2024-11-19 11:39:06.463353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:52.925 qpair failed and we were unable to recover it.
00:27:52.925 [2024-11-19 11:39:06.473202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.925 [2024-11-19 11:39:06.473259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.925 [2024-11-19 11:39:06.473274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.925 [2024-11-19 11:39:06.473280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.925 [2024-11-19 11:39:06.473287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:52.925 [2024-11-19 11:39:06.473302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:52.925 qpair failed and we were unable to recover it.
00:27:52.925 [2024-11-19 11:39:06.483245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.925 [2024-11-19 11:39:06.483318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.925 [2024-11-19 11:39:06.483332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.925 [2024-11-19 11:39:06.483342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.925 [2024-11-19 11:39:06.483348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:52.925 [2024-11-19 11:39:06.483363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:52.925 qpair failed and we were unable to recover it.
00:27:52.926 [2024-11-19 11:39:06.493234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.926 [2024-11-19 11:39:06.493288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.926 [2024-11-19 11:39:06.493302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.926 [2024-11-19 11:39:06.493309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.926 [2024-11-19 11:39:06.493315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:52.926 [2024-11-19 11:39:06.493328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:52.926 qpair failed and we were unable to recover it.
00:27:52.926 [2024-11-19 11:39:06.503339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.926 [2024-11-19 11:39:06.503399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.926 [2024-11-19 11:39:06.503413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.926 [2024-11-19 11:39:06.503421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.926 [2024-11-19 11:39:06.503426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xadaba0
00:27:52.926 [2024-11-19 11:39:06.503441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:52.926 qpair failed and we were unable to recover it.
00:27:52.926 [2024-11-19 11:39:06.513420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.926 [2024-11-19 11:39:06.513521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.926 [2024-11-19 11:39:06.513578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.926 [2024-11-19 11:39:06.513604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.926 [2024-11-19 11:39:06.513625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5068000b90
00:27:52.926 [2024-11-19 11:39:06.513677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:52.926 qpair failed and we were unable to recover it.
00:27:52.926 [2024-11-19 11:39:06.523434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.926 [2024-11-19 11:39:06.523513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.926 [2024-11-19 11:39:06.523542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.926 [2024-11-19 11:39:06.523557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.926 [2024-11-19 11:39:06.523571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5068000b90
00:27:52.926 [2024-11-19 11:39:06.523609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:52.926 qpair failed and we were unable to recover it.
00:27:52.926 [2024-11-19 11:39:06.533426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.926 [2024-11-19 11:39:06.533524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.926 [2024-11-19 11:39:06.533581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.926 [2024-11-19 11:39:06.533607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.926 [2024-11-19 11:39:06.533629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5064000b90
00:27:52.926 [2024-11-19 11:39:06.533678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:52.926 qpair failed and we were unable to recover it.
00:27:52.926 [2024-11-19 11:39:06.543456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.926 [2024-11-19 11:39:06.543538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.926 [2024-11-19 11:39:06.543567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.926 [2024-11-19 11:39:06.543582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.926 [2024-11-19 11:39:06.543595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5064000b90
00:27:52.926 [2024-11-19 11:39:06.543628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:52.926 qpair failed and we were unable to recover it.
00:27:52.926 [2024-11-19 11:39:06.543729] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:27:52.926 A controller has encountered a failure and is being reset.
00:27:53.206 Controller properly reset.
00:27:53.206 Initializing NVMe Controllers
00:27:53.206 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:53.206 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:53.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:53.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:53.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:53.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:53.206 Initialization complete. Launching workers.
00:27:53.206 Starting thread on core 1
00:27:53.206 Starting thread on core 2
00:27:53.206 Starting thread on core 3
00:27:53.206 Starting thread on core 0
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:27:53.206
00:27:53.206 real 0m10.832s
00:27:53.206 user 0m19.686s
00:27:53.206 sys 0m4.722s
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:53.206 ************************************
00:27:53.206 END TEST nvmf_target_disconnect_tc2
00:27:53.206 ************************************
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:53.206 rmmod nvme_tcp
00:27:53.206 rmmod nvme_fabrics
00:27:53.206 rmmod nvme_keyring
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2422171 ']'
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2422171
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2422171 ']'
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2422171
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:27:53.206 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:53.207 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2422171
00:27:53.207 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:27:53.207 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:27:53.207 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2422171'
00:27:53.207 killing process with pid 2422171
00:27:53.207 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2422171
00:27:53.207 11:39:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2422171
00:27:53.467 11:39:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:53.467 11:39:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:53.467 11:39:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:53.467 11:39:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:27:53.467 11:39:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:27:53.467 11:39:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:53.467 11:39:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:27:53.467 11:39:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:53.467 11:39:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:53.467 11:39:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:53.467 11:39:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:53.467 11:39:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:55.374 11:39:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:55.374
00:27:55.374 real 0m19.559s
00:27:55.374 user 0m47.456s
00:27:55.374 sys 0m9.659s
00:27:55.374 11:39:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:55.374 11:39:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:55.374 ************************************
00:27:55.374 END TEST nvmf_target_disconnect
00:27:55.374 ************************************
00:27:55.374 11:39:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:27:55.374
00:27:55.374 real 5m50.097s
00:27:55.374 user 10m28.566s
00:27:55.374 sys 1m58.331s
00:27:55.374 11:39:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:55.633 11:39:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:55.633 ************************************
00:27:55.633 END TEST nvmf_host
00:27:55.633 ************************************
00:27:55.633 11:39:09 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:27:55.633 11:39:09 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:27:55.633 11:39:09 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:27:55.633 11:39:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:27:55.633 11:39:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:55.633 11:39:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:55.633 ************************************
00:27:55.633 START TEST nvmf_target_core_interrupt_mode
00:27:55.633 ************************************
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:27:55.633 * Looking for test storage...
00:27:55.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:27:55.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:55.633 --rc genhtml_branch_coverage=1
00:27:55.633 --rc genhtml_function_coverage=1
00:27:55.633 --rc genhtml_legend=1
00:27:55.633 --rc geninfo_all_blocks=1
00:27:55.633 --rc geninfo_unexecuted_blocks=1
00:27:55.633
00:27:55.633 '
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:27:55.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:55.633 --rc genhtml_branch_coverage=1
00:27:55.633 --rc genhtml_function_coverage=1
00:27:55.633 --rc genhtml_legend=1
00:27:55.633 --rc geninfo_all_blocks=1
00:27:55.633 --rc geninfo_unexecuted_blocks=1
00:27:55.633
00:27:55.633 '
00:27:55.633 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:27:55.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:55.633 --rc genhtml_branch_coverage=1
00:27:55.634 --rc genhtml_function_coverage=1
00:27:55.634 --rc genhtml_legend=1
00:27:55.634 --rc geninfo_all_blocks=1
00:27:55.634 --rc geninfo_unexecuted_blocks=1
00:27:55.634
00:27:55.634 '
00:27:55.634 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:27:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:55.634 --rc genhtml_branch_coverage=1
00:27:55.634 --rc genhtml_function_coverage=1
00:27:55.634 --rc genhtml_legend=1
00:27:55.634 --rc geninfo_all_blocks=1
00:27:55.634 --rc geninfo_unexecuted_blocks=1
00:27:55.634
00:27:55.634 '
00:27:55.634 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:27:55.895 ************************************
00:27:55.895 START TEST nvmf_abort
00:27:55.895 ************************************
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:27:55.895 * Looking for test storage...
00:27:55.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:27:55.895 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:27:55.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:55.896 --rc genhtml_branch_coverage=1
00:27:55.896 --rc genhtml_function_coverage=1
00:27:55.896 --rc genhtml_legend=1
00:27:55.896 --rc geninfo_all_blocks=1
00:27:55.896 --rc geninfo_unexecuted_blocks=1
00:27:55.896
00:27:55.896 '
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:27:55.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:55.896 --rc genhtml_branch_coverage=1
00:27:55.896 --rc genhtml_function_coverage=1
00:27:55.896 --rc genhtml_legend=1
00:27:55.896 --rc geninfo_all_blocks=1
00:27:55.896 --rc geninfo_unexecuted_blocks=1
00:27:55.896
00:27:55.896 '
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:27:55.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:55.896 --rc genhtml_branch_coverage=1
00:27:55.896 --rc genhtml_function_coverage=1
00:27:55.896 --rc genhtml_legend=1
00:27:55.896 --rc geninfo_all_blocks=1
00:27:55.896 --rc geninfo_unexecuted_blocks=1
00:27:55.896
00:27:55.896 '
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:27:55.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:55.896 --rc genhtml_branch_coverage=1
00:27:55.896 --rc genhtml_function_coverage=1
00:27:55.896 --rc genhtml_legend=1
00:27:55.896 --rc geninfo_all_blocks=1
00:27:55.896 --rc geninfo_unexecuted_blocks=1
00:27:55.896
00:27:55.896 '
00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort --
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.896 11:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.896 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.156 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.156 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.156 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.157 11:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:56.157 11:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:02.734 11:39:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:02.734 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:02.734 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.734 
11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:02.734 Found net devices under 0000:86:00.0: cvl_0_0 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:02.734 Found net devices under 0000:86:00.1: cvl_0_1 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:02.734 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.735 11:39:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:02.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:28:02.735 00:28:02.735 --- 10.0.0.2 ping statistics --- 00:28:02.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.735 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:02.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:28:02.735 00:28:02.735 --- 10.0.0.1 ping statistics --- 00:28:02.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.735 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2427410 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2427410 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2427410 ']' 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.735 [2024-11-19 11:39:15.648753] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:02.735 [2024-11-19 11:39:15.649759] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:28:02.735 [2024-11-19 11:39:15.649800] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.735 [2024-11-19 11:39:15.729207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:02.735 [2024-11-19 11:39:15.771311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.735 [2024-11-19 11:39:15.771346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.735 [2024-11-19 11:39:15.771353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.735 [2024-11-19 11:39:15.771359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.735 [2024-11-19 11:39:15.771364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:02.735 [2024-11-19 11:39:15.772668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.735 [2024-11-19 11:39:15.772780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.735 [2024-11-19 11:39:15.772780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.735 [2024-11-19 11:39:15.838869] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:02.735 [2024-11-19 11:39:15.839777] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:02.735 [2024-11-19 11:39:15.839883] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:02.735 [2024-11-19 11:39:15.840052] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.735 [2024-11-19 11:39:15.901577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.735 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:02.736 Malloc0 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.736 Delay0 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.736 [2024-11-19 11:39:15.985591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.736 11:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.736 11:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.736 11:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:02.736 [2024-11-19 11:39:16.114109] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:04.642 Initializing NVMe Controllers 00:28:04.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:04.642 controller IO queue size 128 less than required 00:28:04.642 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:04.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:04.642 Initialization complete. Launching workers. 
00:28:04.642 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36995 00:28:04.642 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37052, failed to submit 66 00:28:04.642 success 36995, unsuccessful 57, failed 0 00:28:04.642 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:04.643 rmmod nvme_tcp 00:28:04.643 rmmod nvme_fabrics 00:28:04.643 rmmod nvme_keyring 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:04.643 11:39:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2427410 ']' 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2427410 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2427410 ']' 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2427410 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2427410 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2427410' 00:28:04.643 killing process with pid 2427410 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2427410 00:28:04.643 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2427410 00:28:04.903 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:04.903 11:39:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:04.903 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:04.903 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:04.903 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:04.903 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:04.903 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:04.903 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:04.903 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:04.903 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.903 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.903 11:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.810 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:06.810 00:28:06.810 real 0m11.036s 00:28:06.810 user 0m10.153s 00:28:06.810 sys 0m5.658s 00:28:06.810 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:06.810 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 ************************************ 00:28:06.810 END TEST nvmf_abort 00:28:06.810 ************************************ 00:28:06.810 11:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:06.810 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:06.810 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.810 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 ************************************ 00:28:06.810 START TEST nvmf_ns_hotplug_stress 00:28:06.810 ************************************ 00:28:06.810 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:07.070 * Looking for test storage... 
00:28:07.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.070 11:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.070 11:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:07.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.070 --rc genhtml_branch_coverage=1 00:28:07.070 --rc genhtml_function_coverage=1 00:28:07.070 --rc genhtml_legend=1 00:28:07.070 --rc geninfo_all_blocks=1 00:28:07.070 --rc geninfo_unexecuted_blocks=1 00:28:07.070 00:28:07.070 ' 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:07.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.070 --rc genhtml_branch_coverage=1 00:28:07.070 --rc genhtml_function_coverage=1 00:28:07.070 --rc genhtml_legend=1 00:28:07.070 --rc geninfo_all_blocks=1 00:28:07.070 --rc geninfo_unexecuted_blocks=1 00:28:07.070 00:28:07.070 ' 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:07.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.070 --rc genhtml_branch_coverage=1 00:28:07.070 --rc genhtml_function_coverage=1 00:28:07.070 --rc genhtml_legend=1 00:28:07.070 --rc geninfo_all_blocks=1 00:28:07.070 --rc geninfo_unexecuted_blocks=1 00:28:07.070 00:28:07.070 ' 00:28:07.070 11:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:07.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.070 --rc genhtml_branch_coverage=1 00:28:07.070 --rc genhtml_function_coverage=1 00:28:07.070 --rc genhtml_legend=1 00:28:07.070 --rc geninfo_all_blocks=1 00:28:07.070 --rc geninfo_unexecuted_blocks=1 00:28:07.070 00:28:07.070 ' 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.070 11:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.070 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.070 
11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.071 11:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:13.645 
11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.645 11:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:13.645 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.645 11:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:13.645 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.645 
11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.645 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:13.646 Found net devices under 0000:86:00.0: cvl_0_0 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:13.646 Found net devices under 0000:86:00.1: cvl_0_1 00:28:13.646 
11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:13.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:13.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:28:13.646 00:28:13.646 --- 10.0.0.2 ping statistics --- 00:28:13.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.646 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:13.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:13.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:28:13.646 00:28:13.646 --- 10.0.0.1 ping statistics --- 00:28:13.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.646 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:13.646 11:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2431311 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2431311 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2431311 ']' 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:13.646 [2024-11-19 11:39:26.728641] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:13.646 [2024-11-19 11:39:26.729569] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:28:13.646 [2024-11-19 11:39:26.729603] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.646 [2024-11-19 11:39:26.810096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:13.646 [2024-11-19 11:39:26.851647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.646 [2024-11-19 11:39:26.851684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.646 [2024-11-19 11:39:26.851691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.646 [2024-11-19 11:39:26.851697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.646 [2024-11-19 11:39:26.851702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:13.646 [2024-11-19 11:39:26.853133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.646 [2024-11-19 11:39:26.853243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.646 [2024-11-19 11:39:26.853243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.646 [2024-11-19 11:39:26.919698] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:13.646 [2024-11-19 11:39:26.920484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:13.646 [2024-11-19 11:39:26.920810] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:13.646 [2024-11-19 11:39:26.920958] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:13.646 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:13.647 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.647 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:28:13.647 11:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:13.647 [2024-11-19 11:39:27.158001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.647 11:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:13.647 11:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:13.906 [2024-11-19 11:39:27.562438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.907 11:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:14.166 11:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:14.426 Malloc0 00:28:14.426 11:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:14.426 Delay0 00:28:14.426 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.685 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:14.945 NULL1 00:28:14.945 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:15.204 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2431667 00:28:15.204 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:15.204 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:15.204 11:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.581 Read completed with error (sct=0, sc=11) 00:28:16.581 11:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:28:16.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.581 11:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:16.581 11:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:16.841 true 00:28:16.841 11:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:16.841 11:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.779 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.779 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:17.779 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:18.039 true 00:28:18.039 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:18.039 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:18.298 11:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.298 11:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:18.298 11:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:18.557 true 00:28:18.557 11:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:18.557 11:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:19.938 11:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.938 11:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:19.938 11:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:19.938 true 00:28:19.938 11:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:19.938 11:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.196 11:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.456 11:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:20.456 11:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:20.715 true 00:28:20.715 11:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:20.715 11:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:21.652 11:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:21.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:21.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:21.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:21.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:21.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:21.911 11:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:21.911 11:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:22.171 true 00:28:22.171 11:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:22.171 11:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.109 11:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:23.109 11:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:23.109 11:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:23.368 true 00:28:23.368 11:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:23.368 11:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.627 11:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:23.887 11:39:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:23.887 11:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:23.887 true 00:28:24.146 11:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:24.146 11:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.086 11:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.345 11:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:25.345 11:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:25.345 true 00:28:25.345 11:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:25.345 11:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.604 11:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.862 11:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:25.862 11:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:26.121 true 00:28:26.121 11:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:26.121 11:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.059 11:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.318 11:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:27.318 11:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 
00:28:27.577 true 00:28:27.577 11:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:27.577 11:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.514 11:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.514 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:28.514 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:28.774 true 00:28:28.774 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:28.774 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.033 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:29.292 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:29.292 11:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1013 00:28:29.292 true 00:28:29.292 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:29.292 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.227 11:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:30.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.486 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:30.486 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:30.745 true 00:28:30.745 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:30.745 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.004 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.263 11:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:31.263 11:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:31.263 true 00:28:31.263 11:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:31.263 11:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.642 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.642 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:32.642 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:32.901 true 00:28:32.901 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2431667 00:28:32.901 11:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.839 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.098 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:34.098 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:34.098 true 00:28:34.098 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:34.098 11:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.357 11:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.615 11:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:34.615 11:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:34.874 true 00:28:34.874 11:39:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:34.874 11:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.810 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:35.810 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:35.810 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.068 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:36.068 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:36.327 true 00:28:36.327 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:36.327 11:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:37.264 11:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.264 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:37.264 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:37.524 true 00:28:37.524 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:37.524 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:37.783 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.042 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:38.042 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:38.042 true 00:28:38.042 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:38.042 11:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
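The `kill -0 2431667` line that opens every cycle is the shell's liveness probe: signal 0 is never delivered, the call only reports via exit status whether the PID still exists, which is why the loop eventually ends with `kill: (2431667) - No such process` once the target app has gone away. A small self-contained illustration:

```shell
# Demonstrating `kill -0`: probes a PID without sending any signal.
sleep 5 &
probe_pid=$!
kill -0 "$probe_pid" && alive=yes              # process running: exit status 0
kill "$probe_pid" 2>/dev/null
wait "$probe_pid" 2>/dev/null || true          # reap it (nonzero status expected)
kill -0 "$probe_pid" 2>/dev/null || alive=no   # reaped: the probe now fails
echo "$alive"                                  # → no
```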
00:28:39.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:39.421 11:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:39.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:39.421 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:39.421 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:39.680 true 00:28:39.680 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:39.680 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.939 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:40.198 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:40.198 11:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:40.458 true 00:28:40.458 11:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:40.458 11:39:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.395 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.654 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:41.654 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:41.914 true 00:28:41.914 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:41.914 11:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.853 11:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:42.853 
11:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:42.853 11:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:43.113 true 00:28:43.113 11:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:43.113 11:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.372 11:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.372 11:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:43.372 11:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:43.631 true 00:28:43.631 11:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667 00:28:43.631 11:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:44.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:44.570 11:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:44.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:44.830 11:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:28:44.830 11:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:28:45.089 true
00:28:45.089 11:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667
00:28:45.089 11:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:45.349 11:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:45.349 Initializing NVMe Controllers
00:28:45.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:45.349 Controller IO queue size 128, less than required.
00:28:45.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.349 Controller IO queue size 128, less than required.
00:28:45.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:45.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:45.349 Initialization complete. Launching workers.
00:28:45.349 ========================================================
00:28:45.349 Latency(us)
00:28:45.349 Device Information : IOPS MiB/s Average min max
00:28:45.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1649.53 0.81 50184.78 2993.57 1038028.48
00:28:45.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16792.44 8.20 7622.04 2004.19 385211.13
00:28:45.349 ========================================================
00:28:45.349 Total : 18441.97 9.00 11429.04 2004.19 1038028.48
00:28:45.607 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:28:45.607 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:45.607 true
00:28:45.607 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2431667
00:28:45.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2431667) - No such process
00:28:45.607 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2431667
00:28:45.607 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:45.865 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:46.125 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:46.125
11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:46.125 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:46.125 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:46.125 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:46.384 null0 00:28:46.384 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:46.384 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:46.384 11:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:46.384 null1 00:28:46.384 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:46.384 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:46.384 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:46.644 null2 00:28:46.644 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:46.644 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:46.644 11:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:46.903 null3 00:28:46.903 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:46.903 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:46.903 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:47.162 null4 00:28:47.162 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:47.162 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:47.162 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:47.162 null5 00:28:47.162 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:47.162 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:47.162 11:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:47.421 null6 00:28:47.421 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:47.421 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:47.421 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:47.680 null7 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:47.680 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2436990 2436991 2436993 2436996 2436997 2436999 2437001 2437002
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.681 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:47.941 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:47.941 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:47.941 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:47.941 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:47.941 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:47.941 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:47.941 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:47.941 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:47.941 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:47.942 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.942 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:47.942 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:47.942 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:47.942 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:48.211 11:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.559 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:48.845 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:48.846 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:49.104 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:49.105 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:49.105 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:49.105 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:49.105 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:49.105 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:49.105 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:49.105 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:49.363 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.363 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.363 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.363 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.363 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:49.363 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:49.363 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.363 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.363 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:49.363 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.363 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.363 11:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:49.363 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.363 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.363 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.363 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:49.363 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.363 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:49.363 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.363 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.363 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:49.363 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.364 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.364 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:49.622 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:49.622 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:49.622 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:49.622 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:49.622 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:49.622 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:49.622 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:49.622 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:49.882 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:50.142 11:40:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.142 11:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:50.402 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:50.402 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:50.402 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:28:50.402 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:50.402 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:50.402 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.402 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:50.402 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:50.660 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.660 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.661 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:50.920 11:40:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:50.920 11:40:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:50.920 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.921 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.921 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:50.921 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.921 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.921 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:50.921 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.921 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.921 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:51.180 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:51.180 11:40:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:51.180 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:51.180 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:51.180 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:51.180 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.180 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:51.180 11:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.440 11:40:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.440 11:40:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.440 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:51.699 11:40:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:51.699 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:51.699 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:51.699 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:51.699 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:51.699 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.699 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:51.699 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:51.958 11:40:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.958 rmmod nvme_tcp 00:28:51.958 rmmod nvme_fabrics 00:28:51.958 rmmod nvme_keyring 00:28:51.958 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2431311 ']' 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2431311 00:28:51.959 11:40:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2431311 ']' 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2431311 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431311 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2431311' 00:28:51.959 killing process with pid 2431311 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2431311 00:28:51.959 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2431311 00:28:52.218 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:52.218 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:52.218 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:52.218 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:52.218 
11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:52.218 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:52.218 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:52.218 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:52.218 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:52.218 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.218 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.218 11:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.123 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:54.123 00:28:54.123 real 0m47.324s 00:28:54.123 user 2m56.850s 00:28:54.123 sys 0m19.858s 00:28:54.383 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:54.383 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:54.383 ************************************ 00:28:54.383 END TEST nvmf_ns_hotplug_stress 00:28:54.383 ************************************ 00:28:54.383 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh 
--transport=tcp --interrupt-mode 00:28:54.383 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:54.383 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:54.383 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:54.383 ************************************ 00:28:54.383 START TEST nvmf_delete_subsystem 00:28:54.383 ************************************ 00:28:54.383 11:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:54.383 * Looking for test storage... 00:28:54.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 
00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:54.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.383 --rc genhtml_branch_coverage=1 00:28:54.383 --rc genhtml_function_coverage=1 00:28:54.383 --rc genhtml_legend=1 00:28:54.383 --rc geninfo_all_blocks=1 00:28:54.383 --rc geninfo_unexecuted_blocks=1 00:28:54.383 00:28:54.383 ' 00:28:54.383 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:54.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.383 --rc genhtml_branch_coverage=1 00:28:54.383 --rc genhtml_function_coverage=1 00:28:54.383 --rc genhtml_legend=1 00:28:54.384 --rc geninfo_all_blocks=1 00:28:54.384 --rc geninfo_unexecuted_blocks=1 00:28:54.384 00:28:54.384 ' 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:54.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.384 --rc genhtml_branch_coverage=1 00:28:54.384 --rc genhtml_function_coverage=1 00:28:54.384 --rc genhtml_legend=1 00:28:54.384 --rc geninfo_all_blocks=1 00:28:54.384 --rc geninfo_unexecuted_blocks=1 00:28:54.384 00:28:54.384 ' 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:54.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.384 --rc genhtml_branch_coverage=1 00:28:54.384 --rc genhtml_function_coverage=1 00:28:54.384 --rc genhtml_legend=1 00:28:54.384 --rc geninfo_all_blocks=1 00:28:54.384 --rc geninfo_unexecuted_blocks=1 00:28:54.384 00:28:54.384 ' 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.384 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:54.644 11:40:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:54.644 11:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.222 11:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:01.222 11:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:01.222 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:01.222 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.222 11:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:01.222 Found net devices under 0000:86:00.0: cvl_0_0 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:01.222 Found net devices under 0000:86:00.1: cvl_0_1 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:01.222 11:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.222 11:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:01.222 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:01.223 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.223 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.223 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.223 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.223 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:01.223 11:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:01.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:01.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:29:01.223 00:29:01.223 --- 10.0.0.2 ping statistics --- 00:29:01.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.223 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:01.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:29:01.223 00:29:01.223 --- 10.0.0.1 ping statistics --- 00:29:01.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.223 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:01.223 
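The `nvmf_tcp_init` sequence traced above (nvmf/common.sh@265-291) isolates the target-side port in its own network namespace, addresses both ends, and verifies connectivity with a ping in each direction. A dry-run sketch of that same sequence, with interface and namespace names (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`) and the 10.0.0.0/24 addressing taken from the log; `run()` only records commands here because the real ones need root:

```shell
# Dry-run of the netns-based TCP test topology: target NIC goes into a
# namespace with 10.0.0.2, initiator NIC stays in the root namespace
# with 10.0.0.1, then both directions are pinged. Swap run() for
# `sudo "$@"` to execute for real.
NS=cvl_0_0_ns_spdk
CMDS=""
run() { CMDS="$CMDS+ $*"$'\n'; }              # collect instead of execute

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"           # target port into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
printf '%s' "$CMDS"
```

Because the target lives in the namespace, the actual nvmf_tgt app is later launched via `ip netns exec cvl_0_0_ns_spdk ...`, which is exactly the `NVMF_TARGET_NS_CMD` prefix visible in the nvmfappstart trace below.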
11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2441369 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2441369 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2441369 ']' 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.223 [2024-11-19 11:40:14.160538] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:01.223 [2024-11-19 11:40:14.161547] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:29:01.223 [2024-11-19 11:40:14.161586] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.223 [2024-11-19 11:40:14.240282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:01.223 [2024-11-19 11:40:14.282360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.223 [2024-11-19 11:40:14.282397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.223 [2024-11-19 11:40:14.282405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.223 [2024-11-19 11:40:14.282410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.223 [2024-11-19 11:40:14.282416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:01.223 [2024-11-19 11:40:14.283557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.223 [2024-11-19 11:40:14.283559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.223 [2024-11-19 11:40:14.350977] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:01.223 [2024-11-19 11:40:14.351522] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:01.223 [2024-11-19 11:40:14.351688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.223 [2024-11-19 11:40:14.416419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.223 [2024-11-19 11:40:14.440660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.223 NULL1 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.223 Delay0 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2441396 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:01.223 11:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:01.224 [2024-11-19 11:40:14.557331] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:03.129 11:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:03.129 11:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.129 11:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, 
sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 [2024-11-19 11:40:16.756888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d964a0 is same with the state(6) to be set 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 
00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 starting I/O failed: -6 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 [2024-11-19 11:40:16.759620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd52c00d350 is same with the state(6) to be set 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 
00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Read completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.129 Write completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Write 
completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:03.130 Write completed with error 
(sct=0, sc=8) 00:29:03.130 Write completed with error (sct=0, sc=8) 00:29:03.130 Read completed with error (sct=0, sc=8) 00:29:04.066 [2024-11-19 11:40:17.734845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d979a0 is same with the state(6) to be set 00:29:04.066 Read completed with error (sct=0, sc=8) 00:29:04.066 Read completed with error (sct=0, sc=8) 00:29:04.066 Write completed with error (sct=0, sc=8) 00:29:04.066 Write completed with error (sct=0, sc=8) 00:29:04.066 Read completed with error (sct=0, sc=8) 00:29:04.066 Read completed with error (sct=0, sc=8) 00:29:04.066 Read completed with error (sct=0, sc=8) 00:29:04.066 Read completed with error (sct=0, sc=8) 00:29:04.066 Read completed with error (sct=0, sc=8) 00:29:04.066 Read completed with error (sct=0, sc=8) 00:29:04.067 [2024-11-19 11:40:17.754378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd52c00d680 is same with the state(6) to be set 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error 
(sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 [2024-11-19 11:40:17.761864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d96680 is same with the state(6) to be set 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 [2024-11-19 11:40:17.762114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d96860 is same with the state(6) to be set 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error 
(sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Write completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 Read completed with error (sct=0, sc=8) 00:29:04.067 [2024-11-19 11:40:17.762533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d962c0 is same with the state(6) to be set 00:29:04.067 Initializing NVMe Controllers 00:29:04.067 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.067 Controller IO queue size 128, less than required. 00:29:04.067 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:04.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:04.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:04.067 Initialization complete. Launching workers. 
00:29:04.067 ======================================================== 00:29:04.067 Latency(us) 00:29:04.067 Device Information : IOPS MiB/s Average min max 00:29:04.067 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.57 0.08 972741.56 2446.78 1047875.01 00:29:04.067 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 143.69 0.07 925447.89 202.01 1011263.14 00:29:04.067 ======================================================== 00:29:04.067 Total : 308.26 0.15 950696.61 202.01 1047875.01 00:29:04.067 00:29:04.067 [2024-11-19 11:40:17.763091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d979a0 (9): Bad file descriptor 00:29:04.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:04.067 11:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.067 11:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:04.067 11:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2441396 00:29:04.067 11:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2441396 00:29:04.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2441396) - No such process 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2441396 00:29:04.636 11:40:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2441396 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2441396 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.636 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:04.637 [2024-11-19 11:40:18.296643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2442068 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2442068 00:29:04.637 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:04.637 [2024-11-19 11:40:18.381288] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:05.205 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:05.205 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2442068 00:29:05.205 11:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:05.770 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:05.770 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2442068 00:29:05.770 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:06.338 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:06.338 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2442068 00:29:06.338 11:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:06.597 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:29:06.597 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2442068 00:29:06.597 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:07.165 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:07.165 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2442068 00:29:07.165 11:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:07.732 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:07.732 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2442068 00:29:07.732 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:07.732 Initializing NVMe Controllers 00:29:07.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.732 Controller IO queue size 128, less than required. 00:29:07.732 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:07.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:07.732 Initialization complete. Launching workers. 
00:29:07.732 ======================================================== 00:29:07.732 Latency(us) 00:29:07.732 Device Information : IOPS MiB/s Average min max 00:29:07.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002192.46 1000199.14 1005986.74 00:29:07.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003868.37 1000198.22 1010092.63 00:29:07.733 ======================================================== 00:29:07.733 Total : 256.00 0.12 1003030.42 1000198.22 1010092.63 00:29:07.733 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2442068 00:29:08.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2442068) - No such process 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2442068 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.301 rmmod nvme_tcp 00:29:08.301 rmmod nvme_fabrics 00:29:08.301 rmmod nvme_keyring 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2441369 ']' 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2441369 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2441369 ']' 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2441369 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2441369 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:08.301 11:40:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2441369' 00:29:08.301 killing process with pid 2441369 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2441369 00:29:08.301 11:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2441369 00:29:08.560 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:08.560 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:08.560 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:08.560 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:08.560 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:08.560 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:08.560 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:08.560 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:08.560 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:08.560 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.560 11:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.560 11:40:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.466 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:10.466 00:29:10.466 real 0m16.220s 00:29:10.466 user 0m26.128s 00:29:10.466 sys 0m6.270s 00:29:10.466 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.466 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.466 ************************************ 00:29:10.466 END TEST nvmf_delete_subsystem 00:29:10.466 ************************************ 00:29:10.466 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:10.466 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:10.466 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.466 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:10.726 ************************************ 00:29:10.726 START TEST nvmf_host_management 00:29:10.726 ************************************ 00:29:10.726 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:10.726 * Looking for test storage... 
00:29:10.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:10.726 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:10.726 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:10.727 11:40:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:10.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.727 --rc genhtml_branch_coverage=1 00:29:10.727 --rc genhtml_function_coverage=1 00:29:10.727 --rc genhtml_legend=1 00:29:10.727 --rc geninfo_all_blocks=1 00:29:10.727 --rc geninfo_unexecuted_blocks=1 00:29:10.727 00:29:10.727 ' 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:10.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.727 --rc genhtml_branch_coverage=1 00:29:10.727 --rc genhtml_function_coverage=1 00:29:10.727 --rc genhtml_legend=1 00:29:10.727 --rc geninfo_all_blocks=1 00:29:10.727 --rc geninfo_unexecuted_blocks=1 00:29:10.727 00:29:10.727 ' 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:10.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.727 --rc genhtml_branch_coverage=1 00:29:10.727 --rc genhtml_function_coverage=1 00:29:10.727 --rc genhtml_legend=1 00:29:10.727 --rc geninfo_all_blocks=1 00:29:10.727 --rc geninfo_unexecuted_blocks=1 00:29:10.727 00:29:10.727 ' 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:10.727 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.727 --rc genhtml_branch_coverage=1 00:29:10.727 --rc genhtml_function_coverage=1 00:29:10.727 --rc genhtml_legend=1 00:29:10.727 --rc geninfo_all_blocks=1 00:29:10.727 --rc geninfo_unexecuted_blocks=1 00:29:10.727 00:29:10.727 ' 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.727 11:40:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.727 
11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:10.727 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:10.728 11:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.303 
11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.303 11:40:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:17.303 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.303 11:40:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:17.303 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.303 11:40:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:17.303 Found net devices under 0000:86:00.0: cvl_0_0 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:17.303 Found net devices under 0000:86:00.1: cvl_0_1 00:29:17.303 11:40:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.303 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:17.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:29:17.304 00:29:17.304 --- 10.0.0.2 ping statistics --- 00:29:17.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.304 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:29:17.304 00:29:17.304 --- 10.0.0.1 ping statistics --- 00:29:17.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.304 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2446078 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2446078 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2446078 ']' 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.304 11:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.304 [2024-11-19 11:40:30.449803] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:17.304 [2024-11-19 11:40:30.450726] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:29:17.304 [2024-11-19 11:40:30.450758] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.304 [2024-11-19 11:40:30.527210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.304 [2024-11-19 11:40:30.571740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.304 [2024-11-19 11:40:30.571776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.304 [2024-11-19 11:40:30.571784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.304 [2024-11-19 11:40:30.571790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.304 [2024-11-19 11:40:30.571795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:17.304 [2024-11-19 11:40:30.573427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.304 [2024-11-19 11:40:30.573532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.304 [2024-11-19 11:40:30.573639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.304 [2024-11-19 11:40:30.573639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:17.304 [2024-11-19 11:40:30.642018] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:17.304 [2024-11-19 11:40:30.642832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:17.304 [2024-11-19 11:40:30.643050] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:17.304 [2024-11-19 11:40:30.643458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:17.304 [2024-11-19 11:40:30.643501] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:17.564 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.564 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:17.564 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:17.564 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.564 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.564 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.564 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:17.564 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.564 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.564 [2024-11-19 11:40:31.334383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.824 11:40:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.824 Malloc0 00:29:17.824 [2024-11-19 11:40:31.422467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2446342 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2446342 /var/tmp/bdevperf.sock 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2446342 ']' 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:17.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.824 { 00:29:17.824 "params": { 00:29:17.824 "name": "Nvme$subsystem", 00:29:17.824 "trtype": "$TEST_TRANSPORT", 00:29:17.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.824 "adrfam": "ipv4", 00:29:17.824 "trsvcid": "$NVMF_PORT", 00:29:17.824 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.824 "hdgst": ${hdgst:-false}, 00:29:17.824 "ddgst": ${ddgst:-false} 00:29:17.824 }, 00:29:17.824 "method": "bdev_nvme_attach_controller" 00:29:17.824 } 00:29:17.824 EOF 00:29:17.824 )") 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:17.824 11:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:17.824 "params": { 00:29:17.824 "name": "Nvme0", 00:29:17.824 "trtype": "tcp", 00:29:17.824 "traddr": "10.0.0.2", 00:29:17.824 "adrfam": "ipv4", 00:29:17.824 "trsvcid": "4420", 00:29:17.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.824 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:17.824 "hdgst": false, 00:29:17.824 "ddgst": false 00:29:17.824 }, 00:29:17.824 "method": "bdev_nvme_attach_controller" 00:29:17.824 }' 00:29:17.824 [2024-11-19 11:40:31.521172] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:29:17.824 [2024-11-19 11:40:31.521220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2446342 ] 00:29:17.824 [2024-11-19 11:40:31.598789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.084 [2024-11-19 11:40:31.640991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.084 Running I/O for 10 seconds... 
00:29:18.654 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.654 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:18.654 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:18.654 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.654 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.654 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.654 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:18.654 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:18.654 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:18.654 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:18.655 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:18.655 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:18.655 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:18.655 11:40:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:18.655 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:18.655 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:18.655 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.655 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.655 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.915 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 00:29:18.915 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 00:29:18.915 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:18.915 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:18.915 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:18.915 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:18.915 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.915 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.915 
[2024-11-19 11:40:32.454053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eec0 is same with the state(6) to be set 00:29:18.915 [2024-11-19 11:40:32.454094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eec0 is same with the state(6) to be set 00:29:18.915 [2024-11-19 11:40:32.454102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eec0 is same with the state(6) to be set 00:29:18.915 [2024-11-19 11:40:32.454109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eec0 is same with the state(6) to be set 00:29:18.915 [2024-11-19 11:40:32.454115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eec0 is same with the state(6) to be set 00:29:18.915 [2024-11-19 11:40:32.454121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eec0 is same with the state(6) to be set 00:29:18.915 [2024-11-19 11:40:32.454128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eec0 is same with the state(6) to be set 00:29:18.915 [2024-11-19 11:40:32.454134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eec0 is same with the state(6) to be set 00:29:18.915 [2024-11-19 11:40:32.454140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eec0 is same with the state(6) to be set 00:29:18.915 [2024-11-19 11:40:32.454146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eec0 is same with the state(6) to be set 00:29:18.915 [2024-11-19 11:40:32.454152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eec0 is same with the state(6) to be set 00:29:18.915 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.915 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:18.915 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.915 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.915 [2024-11-19 11:40:32.461145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.915 [2024-11-19 11:40:32.461178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.915 [2024-11-19 11:40:32.461188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.915 [2024-11-19 11:40:32.461196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.915 [2024-11-19 11:40:32.461203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.915 [2024-11-19 11:40:32.461211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.915 [2024-11-19 11:40:32.461218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.915 [2024-11-19 11:40:32.461224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.915 [2024-11-19 11:40:32.461232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b500 is same with the state(6) to be set 00:29:18.915 [2024-11-19 11:40:32.461552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.915 [2024-11-19 11:40:32.461568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1 through cid:63 (lba:24704-32640, len:128); elided for brevity ...]
00:29:18.917 [2024-11-19 11:40:32.462554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa34810 is same with the state(6) to be set
00:29:18.917 [2024-11-19 11:40:32.463504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:18.917 task offset: 24576 on job bdev=Nvme0n1 fails
00:29:18.917
00:29:18.917 Latency(us)
00:29:18.917 [2024-11-19T10:40:32.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:18.917 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:18.917 Job: Nvme0n1 ended in about 0.63 seconds with error
00:29:18.917 Verification LBA range: start 0x0 length 0x400
00:29:18.917 Nvme0n1 : 0.63 1943.31 121.46 102.28 0.00 30652.20 1780.87 27240.18
00:29:18.917 [2024-11-19T10:40:32.698Z] ===================================================================================================================
00:29:18.917 [2024-11-19T10:40:32.698Z] Total : 1943.31 121.46 102.28 0.00 30652.20 1780.87 27240.18
00:29:18.917 [2024-11-19 11:40:32.465884] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:18.917 [2024-11-19 11:40:32.465903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b500 (9): Bad file descriptor
00:29:18.917 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.917 11:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:18.917 [2024-11-19 11:40:32.508981] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:19.855 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2446342 00:29:19.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2446342) - No such process 00:29:19.855 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:19.855 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:19.855 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:19.855 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:19.856 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:19.856 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:19.856 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:19.856 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:19.856 
{ 00:29:19.856 "params": { 00:29:19.856 "name": "Nvme$subsystem", 00:29:19.856 "trtype": "$TEST_TRANSPORT", 00:29:19.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.856 "adrfam": "ipv4", 00:29:19.856 "trsvcid": "$NVMF_PORT", 00:29:19.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.856 "hdgst": ${hdgst:-false}, 00:29:19.856 "ddgst": ${ddgst:-false} 00:29:19.856 }, 00:29:19.856 "method": "bdev_nvme_attach_controller" 00:29:19.856 } 00:29:19.856 EOF 00:29:19.856 )") 00:29:19.856 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:19.856 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:19.856 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:19.856 11:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:19.856 "params": { 00:29:19.856 "name": "Nvme0", 00:29:19.856 "trtype": "tcp", 00:29:19.856 "traddr": "10.0.0.2", 00:29:19.856 "adrfam": "ipv4", 00:29:19.856 "trsvcid": "4420", 00:29:19.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.856 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:19.856 "hdgst": false, 00:29:19.856 "ddgst": false 00:29:19.856 }, 00:29:19.856 "method": "bdev_nvme_attach_controller" 00:29:19.856 }' 00:29:19.856 [2024-11-19 11:40:33.522998] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:29:19.856 [2024-11-19 11:40:33.523048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2446595 ]
00:29:19.856 [2024-11-19 11:40:33.600145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:20.116 [2024-11-19 11:40:33.639493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:20.375 Running I/O for 1 seconds...
00:29:21.312 1984.00 IOPS, 124.00 MiB/s
00:29:21.312 Latency(us)
00:29:21.312 [2024-11-19T10:40:35.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:21.312 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:21.312 Verification LBA range: start 0x0 length 0x400
00:29:21.312 Nvme0n1 : 1.01 2033.64 127.10 0.00 0.00 30959.41 5442.34 27354.16
00:29:21.312 [2024-11-19T10:40:35.093Z] ===================================================================================================================
00:29:21.312 [2024-11-19T10:40:35.093Z] Total : 2033.64 127.10 0.00 0.00 30959.41 5442.34 27354.16
00:29:21.312 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management --
target/host_management.sh@40 -- # nvmftestfini 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.572 rmmod nvme_tcp 00:29:21.572 rmmod nvme_fabrics 00:29:21.572 rmmod nvme_keyring 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2446078 ']' 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2446078 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2446078 ']' 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2446078 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:21.572 11:40:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2446078 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2446078' 00:29:21.572 killing process with pid 2446078 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2446078 00:29:21.572 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2446078 00:29:21.832 [2024-11-19 11:40:35.366324] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:21.832 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.832 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.832 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.832 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:21.832 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:21.832 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.832 11:40:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.832 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.832 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.832 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.832 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.832 11:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.739 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.739 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:23.739 00:29:23.739 real 0m13.206s 00:29:23.739 user 0m19.413s 00:29:23.739 sys 0m6.349s 00:29:23.739 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.739 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:23.739 ************************************ 00:29:23.739 END TEST nvmf_host_management 00:29:23.739 ************************************ 00:29:23.739 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:23.739 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:23.739 
11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.739 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:23.998 ************************************ 00:29:23.998 START TEST nvmf_lvol 00:29:23.999 ************************************ 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:23.999 * Looking for test storage... 00:29:23.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.999 11:40:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:23.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.999 --rc genhtml_branch_coverage=1 00:29:23.999 --rc 
genhtml_function_coverage=1 00:29:23.999 --rc genhtml_legend=1 00:29:23.999 --rc geninfo_all_blocks=1 00:29:23.999 --rc geninfo_unexecuted_blocks=1 00:29:23.999 00:29:23.999 ' 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:23.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.999 --rc genhtml_branch_coverage=1 00:29:23.999 --rc genhtml_function_coverage=1 00:29:23.999 --rc genhtml_legend=1 00:29:23.999 --rc geninfo_all_blocks=1 00:29:23.999 --rc geninfo_unexecuted_blocks=1 00:29:23.999 00:29:23.999 ' 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:23.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.999 --rc genhtml_branch_coverage=1 00:29:23.999 --rc genhtml_function_coverage=1 00:29:23.999 --rc genhtml_legend=1 00:29:23.999 --rc geninfo_all_blocks=1 00:29:23.999 --rc geninfo_unexecuted_blocks=1 00:29:23.999 00:29:23.999 ' 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:23.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.999 --rc genhtml_branch_coverage=1 00:29:23.999 --rc genhtml_function_coverage=1 00:29:23.999 --rc genhtml_legend=1 00:29:23.999 --rc geninfo_all_blocks=1 00:29:23.999 --rc geninfo_unexecuted_blocks=1 00:29:23.999 00:29:23.999 ' 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.999 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.000 11:40:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.000 11:40:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.000 11:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:30.575 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:30.575 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.575 11:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:30.575 Found net devices under 0000:86:00.0: cvl_0_0 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:30.575 Found net devices under 0000:86:00.1: cvl_0_1 00:29:30.575 11:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.575 11:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:30.575 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:30.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:30.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:29:30.576 00:29:30.576 --- 10.0.0.2 ping statistics --- 00:29:30.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.576 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:30.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:29:30.576 00:29:30.576 --- 10.0.0.1 ping statistics --- 00:29:30.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.576 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:30.576 
11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2450354 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2450354 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2450354 ']' 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:30.576 [2024-11-19 11:40:43.741549] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:29:30.576 [2024-11-19 11:40:43.742477] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:29:30.576 [2024-11-19 11:40:43.742512] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.576 [2024-11-19 11:40:43.819085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:30.576 [2024-11-19 11:40:43.861001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.576 [2024-11-19 11:40:43.861036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.576 [2024-11-19 11:40:43.861044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.576 [2024-11-19 11:40:43.861051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.576 [2024-11-19 11:40:43.861056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.576 [2024-11-19 11:40:43.862315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.576 [2024-11-19 11:40:43.862422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.576 [2024-11-19 11:40:43.862424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.576 [2024-11-19 11:40:43.928734] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:30.576 [2024-11-19 11:40:43.929553] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:30.576 [2024-11-19 11:40:43.929700] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:30.576 [2024-11-19 11:40:43.929871] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.576 11:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:30.576 [2024-11-19 11:40:44.163224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.576 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:30.836 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:30.836 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:31.095 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:31.095 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:31.095 11:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:31.354 11:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b0505bd6-1cb8-4808-b314-ae31f8e28648 00:29:31.354 11:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b0505bd6-1cb8-4808-b314-ae31f8e28648 lvol 20 00:29:31.614 11:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8b9d465e-3a8f-4d1c-94a8-a4352906c3aa 00:29:31.614 11:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:31.872 11:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b9d465e-3a8f-4d1c-94a8-a4352906c3aa 00:29:32.131 11:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:32.131 [2024-11-19 11:40:45.839122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.131 11:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:32.390 
11:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2450836 00:29:32.390 11:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:32.390 11:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:33.327 11:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8b9d465e-3a8f-4d1c-94a8-a4352906c3aa MY_SNAPSHOT 00:29:33.587 11:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6ff31b16-d11e-4b7a-a932-d25b0b9a9819 00:29:33.587 11:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8b9d465e-3a8f-4d1c-94a8-a4352906c3aa 30 00:29:33.846 11:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6ff31b16-d11e-4b7a-a932-d25b0b9a9819 MY_CLONE 00:29:34.104 11:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bdb22852-0c63-46e3-a8e7-9c79a9ddbed1 00:29:34.104 11:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bdb22852-0c63-46e3-a8e7-9c79a9ddbed1 00:29:34.671 11:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2450836 00:29:42.798 Initializing NVMe Controllers 00:29:42.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:42.798 
Controller IO queue size 128, less than required. 00:29:42.798 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:42.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:42.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:42.798 Initialization complete. Launching workers. 00:29:42.798 ======================================================== 00:29:42.798 Latency(us) 00:29:42.798 Device Information : IOPS MiB/s Average min max 00:29:42.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12039.70 47.03 10634.13 1619.72 62733.46 00:29:42.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11900.90 46.49 10759.36 3583.71 60145.18 00:29:42.798 ======================================================== 00:29:42.798 Total : 23940.60 93.52 10696.39 1619.72 62733.46 00:29:42.798 00:29:42.798 11:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:43.057 11:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8b9d465e-3a8f-4d1c-94a8-a4352906c3aa 00:29:43.342 11:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b0505bd6-1cb8-4808-b314-ae31f8e28648 00:29:43.342 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:43.342 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:43.342 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:43.342 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:43.342 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:43.342 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:43.342 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:43.342 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:43.342 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:43.342 rmmod nvme_tcp 00:29:43.342 rmmod nvme_fabrics 00:29:43.631 rmmod nvme_keyring 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2450354 ']' 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2450354 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2450354 ']' 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2450354 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2450354 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2450354' 00:29:43.631 killing process with pid 2450354 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2450354 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2450354 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:43.631 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.631 11:40:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.919 11:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.825 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:45.825 00:29:45.825 real 0m21.925s 00:29:45.825 user 0m55.837s 00:29:45.825 sys 0m9.786s 00:29:45.825 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.825 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:45.825 ************************************ 00:29:45.825 END TEST nvmf_lvol 00:29:45.825 ************************************ 00:29:45.825 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:45.825 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:45.825 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:45.825 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:45.825 ************************************ 00:29:45.825 START TEST nvmf_lvs_grow 00:29:45.825 ************************************ 00:29:45.825 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:46.084 * Looking for test storage... 
00:29:46.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.084 11:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.084 11:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:46.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.084 --rc genhtml_branch_coverage=1 00:29:46.084 --rc genhtml_function_coverage=1 00:29:46.084 --rc genhtml_legend=1 00:29:46.084 --rc geninfo_all_blocks=1 00:29:46.084 --rc geninfo_unexecuted_blocks=1 00:29:46.084 00:29:46.084 ' 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:46.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.084 --rc genhtml_branch_coverage=1 00:29:46.084 --rc genhtml_function_coverage=1 00:29:46.084 --rc genhtml_legend=1 00:29:46.084 --rc geninfo_all_blocks=1 00:29:46.084 --rc geninfo_unexecuted_blocks=1 00:29:46.084 00:29:46.084 ' 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:46.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.084 --rc genhtml_branch_coverage=1 00:29:46.084 --rc genhtml_function_coverage=1 00:29:46.084 --rc genhtml_legend=1 00:29:46.084 --rc geninfo_all_blocks=1 00:29:46.084 --rc geninfo_unexecuted_blocks=1 00:29:46.084 00:29:46.084 ' 00:29:46.084 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:46.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.084 --rc genhtml_branch_coverage=1 00:29:46.084 --rc genhtml_function_coverage=1 00:29:46.084 --rc genhtml_legend=1 00:29:46.084 --rc geninfo_all_blocks=1 00:29:46.085 --rc 
geninfo_unexecuted_blocks=1 00:29:46.085 00:29:46.085 ' 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:46.085 11:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.085 11:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:46.085 11:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:46.085 11:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:52.660 
11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.660 11:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.660 11:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:52.660 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:52.660 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:52.660 Found net devices under 0000:86:00.0: cvl_0_0 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.660 11:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:52.660 Found net devices under 0000:86:00.1: cvl_0_1 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.660 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.661 
11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:29:52.661 00:29:52.661 --- 10.0.0.2 ping statistics --- 00:29:52.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.661 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:52.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:29:52.661 00:29:52.661 --- 10.0.0.1 ping statistics --- 00:29:52.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.661 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:52.661 11:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2455982 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2455982 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2455982 ']' 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:52.661 [2024-11-19 11:41:05.762577] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:52.661 [2024-11-19 11:41:05.763509] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:29:52.661 [2024-11-19 11:41:05.763541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.661 [2024-11-19 11:41:05.842532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.661 [2024-11-19 11:41:05.884458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.661 [2024-11-19 11:41:05.884490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.661 [2024-11-19 11:41:05.884497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.661 [2024-11-19 11:41:05.884503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.661 [2024-11-19 11:41:05.884508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.661 [2024-11-19 11:41:05.885063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.661 [2024-11-19 11:41:05.951919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:52.661 [2024-11-19 11:41:05.952136] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
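For readers following the common.sh trace above: the nvmf_tcp_init step splits the two E810 ports into a target/initiator pair on a single host by moving one port into a network namespace. A dry-run sketch of that plumbing (interface names, addresses, and the namespace name are taken from the log; `run` only records each command instead of executing it, since the real steps need root and the physical NICs):

```shell
#!/bin/sh
# Dry-run: collect the commands instead of executing them.
CMDS=""
run() { CMDS="$CMDS$*
"; }

NS=cvl_0_0_ns_spdk                                # target-side namespace (from the log)
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"               # target port moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port (4420) toward the initiator-facing interface.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
printf '%s' "$CMDS"
```

The two `ping -c 1` checks in the log then verify the pair is reachable in both directions before the target is started inside the namespace via `ip netns exec`.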
00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:52.661 11:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:52.661 [2024-11-19 11:41:06.193701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:52.661 ************************************ 00:29:52.661 START TEST lvs_grow_clean 00:29:52.661 ************************************ 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:52.661 11:41:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:52.661 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:52.921 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:52.921 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:52.921 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 00:29:52.921 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 00:29:52.921 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:53.180 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:53.180 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:53.180 11:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 lvol 150 00:29:53.439 11:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0a9c609f-925b-4d5e-9e2a-14dd41a9421a 00:29:53.439 11:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:53.439 11:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:53.697 [2024-11-19 11:41:07.253463] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:53.697 [2024-11-19 11:41:07.253601] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:53.697 true 00:29:53.697 11:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 00:29:53.697 11:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:53.697 11:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:53.697 11:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:53.955 11:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a9c609f-925b-4d5e-9e2a-14dd41a9421a 00:29:54.214 11:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:54.473 [2024-11-19 11:41:08.041900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.473 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:54.473 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2456472 00:29:54.473 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.473 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:54.473 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2456472 /var/tmp/bdevperf.sock 00:29:54.473 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2456472 ']' 00:29:54.473 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:54.473 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.473 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:54.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
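Stripped of the xtrace noise, the lvs_grow fixture traced above boils down to a short setup sequence: a file-backed AIO bdev, an lvstore and lvol on top of it, and an NVMe/TCP subsystem exporting the lvol. A dry-run sketch of that sequence (paths shortened; `step` only records each call rather than invoking scripts/rpc.py, and the UUID placeholders stand in for per-run values printed in the log):

```shell
#!/bin/sh
CALLS=""
step() { CALLS="$CALLS$*
"; }

# 200 MiB file with 4 MiB clusters -> 50 clusters, one of which the
# lvstore keeps for metadata, hence the 49 data clusters seen in the log.
step truncate -s 200M aio_file                    # plain command, not an RPC
step bdev_aio_create aio_file aio_bdev 4096
step bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs
step bdev_lvol_create -u "LVS_UUID" lvol 150      # LVS_UUID: per-run value
step nvmf_create_transport -t tcp -o -u 8192
step nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
step nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "LVOL_UUID"
step nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
printf '%s' "$CALLS"
```

The grow half of the test then truncates the file to 400 MiB, calls `bdev_aio_rescan` so the AIO bdev picks up the new size, and later `bdev_lvol_grow_lvstore` to expand the lvstore into the added space.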
00:29:54.473 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.473 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:54.732 [2024-11-19 11:41:08.289320] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:29:54.732 [2024-11-19 11:41:08.289368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2456472 ] 00:29:54.732 [2024-11-19 11:41:08.358251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.732 [2024-11-19 11:41:08.416925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.991 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.991 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:54.991 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:55.251 Nvme0n1 00:29:55.251 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:55.251 [ 00:29:55.251 { 00:29:55.251 "name": "Nvme0n1", 00:29:55.251 "aliases": [ 00:29:55.251 "0a9c609f-925b-4d5e-9e2a-14dd41a9421a" 00:29:55.251 ], 00:29:55.251 "product_name": "NVMe disk", 00:29:55.251 
"block_size": 4096, 00:29:55.251 "num_blocks": 38912, 00:29:55.251 "uuid": "0a9c609f-925b-4d5e-9e2a-14dd41a9421a", 00:29:55.251 "numa_id": 1, 00:29:55.251 "assigned_rate_limits": { 00:29:55.251 "rw_ios_per_sec": 0, 00:29:55.251 "rw_mbytes_per_sec": 0, 00:29:55.251 "r_mbytes_per_sec": 0, 00:29:55.251 "w_mbytes_per_sec": 0 00:29:55.251 }, 00:29:55.251 "claimed": false, 00:29:55.251 "zoned": false, 00:29:55.251 "supported_io_types": { 00:29:55.251 "read": true, 00:29:55.251 "write": true, 00:29:55.251 "unmap": true, 00:29:55.251 "flush": true, 00:29:55.251 "reset": true, 00:29:55.251 "nvme_admin": true, 00:29:55.251 "nvme_io": true, 00:29:55.251 "nvme_io_md": false, 00:29:55.251 "write_zeroes": true, 00:29:55.251 "zcopy": false, 00:29:55.251 "get_zone_info": false, 00:29:55.251 "zone_management": false, 00:29:55.251 "zone_append": false, 00:29:55.251 "compare": true, 00:29:55.251 "compare_and_write": true, 00:29:55.251 "abort": true, 00:29:55.251 "seek_hole": false, 00:29:55.251 "seek_data": false, 00:29:55.251 "copy": true, 00:29:55.251 "nvme_iov_md": false 00:29:55.251 }, 00:29:55.251 "memory_domains": [ 00:29:55.251 { 00:29:55.251 "dma_device_id": "system", 00:29:55.251 "dma_device_type": 1 00:29:55.251 } 00:29:55.251 ], 00:29:55.251 "driver_specific": { 00:29:55.251 "nvme": [ 00:29:55.251 { 00:29:55.251 "trid": { 00:29:55.251 "trtype": "TCP", 00:29:55.251 "adrfam": "IPv4", 00:29:55.251 "traddr": "10.0.0.2", 00:29:55.251 "trsvcid": "4420", 00:29:55.251 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:55.251 }, 00:29:55.251 "ctrlr_data": { 00:29:55.251 "cntlid": 1, 00:29:55.251 "vendor_id": "0x8086", 00:29:55.251 "model_number": "SPDK bdev Controller", 00:29:55.251 "serial_number": "SPDK0", 00:29:55.251 "firmware_revision": "25.01", 00:29:55.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.251 "oacs": { 00:29:55.251 "security": 0, 00:29:55.251 "format": 0, 00:29:55.251 "firmware": 0, 00:29:55.251 "ns_manage": 0 00:29:55.251 }, 00:29:55.251 "multi_ctrlr": true, 
00:29:55.251 "ana_reporting": false 00:29:55.251 }, 00:29:55.251 "vs": { 00:29:55.251 "nvme_version": "1.3" 00:29:55.251 }, 00:29:55.251 "ns_data": { 00:29:55.251 "id": 1, 00:29:55.251 "can_share": true 00:29:55.251 } 00:29:55.251 } 00:29:55.251 ], 00:29:55.251 "mp_policy": "active_passive" 00:29:55.251 } 00:29:55.251 } 00:29:55.251 ] 00:29:55.251 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:55.251 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2456601 00:29:55.251 11:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:55.510 Running I/O for 10 seconds... 00:29:56.446 Latency(us) 00:29:56.446 [2024-11-19T10:41:10.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:56.446 Nvme0n1 : 1.00 22005.00 85.96 0.00 0.00 0.00 0.00 0.00 00:29:56.446 [2024-11-19T10:41:10.227Z] =================================================================================================================== 00:29:56.446 [2024-11-19T10:41:10.227Z] Total : 22005.00 85.96 0.00 0.00 0.00 0.00 0.00 00:29:56.446 00:29:57.437 11:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 00:29:57.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.437 Nvme0n1 : 2.00 22305.50 87.13 0.00 0.00 0.00 0.00 0.00 00:29:57.437 [2024-11-19T10:41:11.218Z] 
=================================================================================================================== 00:29:57.437 [2024-11-19T10:41:11.218Z] Total : 22305.50 87.13 0.00 0.00 0.00 0.00 0.00 00:29:57.437 00:29:57.437 true 00:29:57.696 11:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 00:29:57.696 11:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:57.696 11:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:57.696 11:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:57.696 11:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2456601 00:29:58.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.632 Nvme0n1 : 3.00 22427.00 87.61 0.00 0.00 0.00 0.00 0.00 00:29:58.632 [2024-11-19T10:41:12.413Z] =================================================================================================================== 00:29:58.632 [2024-11-19T10:41:12.413Z] Total : 22427.00 87.61 0.00 0.00 0.00 0.00 0.00 00:29:58.632 00:29:59.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.570 Nvme0n1 : 4.00 22527.75 88.00 0.00 0.00 0.00 0.00 0.00 00:29:59.570 [2024-11-19T10:41:13.351Z] =================================================================================================================== 00:29:59.570 [2024-11-19T10:41:13.351Z] Total : 22527.75 88.00 0.00 0.00 0.00 0.00 0.00 00:29:59.570 00:30:00.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:00.508 Nvme0n1 : 5.00 22607.00 88.31 0.00 0.00 0.00 0.00 0.00 00:30:00.508 [2024-11-19T10:41:14.289Z] =================================================================================================================== 00:30:00.508 [2024-11-19T10:41:14.289Z] Total : 22607.00 88.31 0.00 0.00 0.00 0.00 0.00 00:30:00.508 00:30:01.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:01.447 Nvme0n1 : 6.00 22667.83 88.55 0.00 0.00 0.00 0.00 0.00 00:30:01.447 [2024-11-19T10:41:15.228Z] =================================================================================================================== 00:30:01.447 [2024-11-19T10:41:15.228Z] Total : 22667.83 88.55 0.00 0.00 0.00 0.00 0.00 00:30:01.447 00:30:02.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:02.386 Nvme0n1 : 7.00 22700.14 88.67 0.00 0.00 0.00 0.00 0.00 00:30:02.386 [2024-11-19T10:41:16.167Z] =================================================================================================================== 00:30:02.386 [2024-11-19T10:41:16.167Z] Total : 22700.14 88.67 0.00 0.00 0.00 0.00 0.00 00:30:02.386 00:30:03.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:03.324 Nvme0n1 : 8.00 22736.00 88.81 0.00 0.00 0.00 0.00 0.00 00:30:03.324 [2024-11-19T10:41:17.105Z] =================================================================================================================== 00:30:03.324 [2024-11-19T10:41:17.105Z] Total : 22736.00 88.81 0.00 0.00 0.00 0.00 0.00 00:30:03.324 00:30:04.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:04.703 Nvme0n1 : 9.00 22763.89 88.92 0.00 0.00 0.00 0.00 0.00 00:30:04.703 [2024-11-19T10:41:18.484Z] =================================================================================================================== 00:30:04.703 [2024-11-19T10:41:18.484Z] Total : 22763.89 88.92 0.00 0.00 0.00 0.00 0.00 00:30:04.703 
00:30:05.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:05.641 Nvme0n1 : 10.00 22786.20 89.01 0.00 0.00 0.00 0.00 0.00 00:30:05.641 [2024-11-19T10:41:19.422Z] =================================================================================================================== 00:30:05.641 [2024-11-19T10:41:19.422Z] Total : 22786.20 89.01 0.00 0.00 0.00 0.00 0.00 00:30:05.641 00:30:05.641 00:30:05.641 Latency(us) 00:30:05.641 [2024-11-19T10:41:19.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:05.641 Nvme0n1 : 10.00 22783.85 89.00 0.00 0.00 5614.58 3219.81 25302.59 00:30:05.641 [2024-11-19T10:41:19.422Z] =================================================================================================================== 00:30:05.641 [2024-11-19T10:41:19.422Z] Total : 22783.85 89.00 0.00 0.00 5614.58 3219.81 25302.59 00:30:05.641 { 00:30:05.641 "results": [ 00:30:05.641 { 00:30:05.641 "job": "Nvme0n1", 00:30:05.641 "core_mask": "0x2", 00:30:05.641 "workload": "randwrite", 00:30:05.641 "status": "finished", 00:30:05.641 "queue_depth": 128, 00:30:05.641 "io_size": 4096, 00:30:05.641 "runtime": 10.003885, 00:30:05.641 "iops": 22783.848474867515, 00:30:05.641 "mibps": 88.99940810495123, 00:30:05.641 "io_failed": 0, 00:30:05.641 "io_timeout": 0, 00:30:05.641 "avg_latency_us": 5614.575692987896, 00:30:05.641 "min_latency_us": 3219.8121739130434, 00:30:05.641 "max_latency_us": 25302.594782608696 00:30:05.641 } 00:30:05.641 ], 00:30:05.641 "core_count": 1 00:30:05.641 } 00:30:05.641 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2456472 00:30:05.641 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2456472 ']' 00:30:05.641 11:41:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2456472 00:30:05.641 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:05.641 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.641 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2456472 00:30:05.641 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:05.641 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:05.641 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2456472' 00:30:05.641 killing process with pid 2456472 00:30:05.641 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2456472 00:30:05.641 Received shutdown signal, test time was about 10.000000 seconds 00:30:05.641 00:30:05.641 Latency(us) 00:30:05.641 [2024-11-19T10:41:19.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.641 [2024-11-19T10:41:19.422Z] =================================================================================================================== 00:30:05.641 [2024-11-19T10:41:19.423Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:05.642 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2456472 00:30:05.642 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:05.900 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:06.159 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 00:30:06.159 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:06.418 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:06.418 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:06.418 11:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:06.418 [2024-11-19 11:41:20.125515] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:06.418 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 00:30:06.418 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:06.418 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 00:30:06.418 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:06.418 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:06.418 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:06.418 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:06.418 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:06.419 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:06.419 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:06.419 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:06.419 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 00:30:06.678 request: 00:30:06.678 { 00:30:06.678 "uuid": "09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53", 00:30:06.678 "method": 
"bdev_lvol_get_lvstores", 00:30:06.678 "req_id": 1 00:30:06.678 } 00:30:06.678 Got JSON-RPC error response 00:30:06.678 response: 00:30:06.678 { 00:30:06.678 "code": -19, 00:30:06.678 "message": "No such device" 00:30:06.678 } 00:30:06.678 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:06.678 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:06.678 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:06.678 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:06.678 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:06.937 aio_bdev 00:30:06.938 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0a9c609f-925b-4d5e-9e2a-14dd41a9421a 00:30:06.938 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=0a9c609f-925b-4d5e-9e2a-14dd41a9421a 00:30:06.938 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:06.938 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:06.938 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:06.938 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:06.938 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:07.197 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0a9c609f-925b-4d5e-9e2a-14dd41a9421a -t 2000 00:30:07.197 [ 00:30:07.197 { 00:30:07.197 "name": "0a9c609f-925b-4d5e-9e2a-14dd41a9421a", 00:30:07.197 "aliases": [ 00:30:07.197 "lvs/lvol" 00:30:07.197 ], 00:30:07.197 "product_name": "Logical Volume", 00:30:07.197 "block_size": 4096, 00:30:07.197 "num_blocks": 38912, 00:30:07.197 "uuid": "0a9c609f-925b-4d5e-9e2a-14dd41a9421a", 00:30:07.197 "assigned_rate_limits": { 00:30:07.197 "rw_ios_per_sec": 0, 00:30:07.197 "rw_mbytes_per_sec": 0, 00:30:07.197 "r_mbytes_per_sec": 0, 00:30:07.197 "w_mbytes_per_sec": 0 00:30:07.197 }, 00:30:07.197 "claimed": false, 00:30:07.197 "zoned": false, 00:30:07.197 "supported_io_types": { 00:30:07.197 "read": true, 00:30:07.197 "write": true, 00:30:07.197 "unmap": true, 00:30:07.197 "flush": false, 00:30:07.197 "reset": true, 00:30:07.197 "nvme_admin": false, 00:30:07.197 "nvme_io": false, 00:30:07.197 "nvme_io_md": false, 00:30:07.197 "write_zeroes": true, 00:30:07.197 "zcopy": false, 00:30:07.197 "get_zone_info": false, 00:30:07.197 "zone_management": false, 00:30:07.197 "zone_append": false, 00:30:07.197 "compare": false, 00:30:07.197 "compare_and_write": false, 00:30:07.197 "abort": false, 00:30:07.197 "seek_hole": true, 00:30:07.197 "seek_data": true, 00:30:07.197 "copy": false, 00:30:07.197 "nvme_iov_md": false 00:30:07.197 }, 00:30:07.197 "driver_specific": { 00:30:07.197 "lvol": { 00:30:07.197 "lvol_store_uuid": "09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53", 00:30:07.197 "base_bdev": "aio_bdev", 00:30:07.197 
"thin_provision": false, 00:30:07.197 "num_allocated_clusters": 38, 00:30:07.197 "snapshot": false, 00:30:07.197 "clone": false, 00:30:07.197 "esnap_clone": false 00:30:07.197 } 00:30:07.197 } 00:30:07.197 } 00:30:07.197 ] 00:30:07.197 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:07.197 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 00:30:07.197 11:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:07.456 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:07.456 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 00:30:07.456 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:07.715 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:07.715 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0a9c609f-925b-4d5e-9e2a-14dd41a9421a 00:30:07.974 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 09d9e9ed-ba86-4b1c-958e-21d0bd9b6d53 
00:30:08.234 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:08.234 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:08.234 00:30:08.234 real 0m15.716s 00:30:08.234 user 0m15.320s 00:30:08.234 sys 0m1.488s 00:30:08.234 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.234 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:08.234 ************************************ 00:30:08.234 END TEST lvs_grow_clean 00:30:08.234 ************************************ 00:30:08.234 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:08.234 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:08.234 11:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.234 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:08.494 ************************************ 00:30:08.494 START TEST lvs_grow_dirty 00:30:08.494 ************************************ 00:30:08.494 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:08.494 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:08.494 11:41:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:08.494 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:08.494 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:08.494 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:08.494 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:08.494 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:08.494 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:08.494 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:08.494 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:08.494 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:08.753 11:41:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=31122478-5f87-486c-b87d-0b8a2260d25b 00:30:08.753 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31122478-5f87-486c-b87d-0b8a2260d25b 00:30:08.753 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:09.011 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:09.011 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:09.012 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 31122478-5f87-486c-b87d-0b8a2260d25b lvol 150 00:30:09.271 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a25200be-577c-47fa-bebe-ec8bb1a88eff 00:30:09.271 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:09.271 11:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:09.271 [2024-11-19 11:41:23.029440] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:09.271 [2024-11-19 
11:41:23.029566] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:09.271 true 00:30:09.271 11:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:09.271 11:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31122478-5f87-486c-b87d-0b8a2260d25b 00:30:09.530 11:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:09.530 11:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:09.789 11:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a25200be-577c-47fa-bebe-ec8bb1a88eff 00:30:10.048 11:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:10.048 [2024-11-19 11:41:23.813883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.310 11:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:10.310 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2459058 00:30:10.310 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:10.310 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:10.310 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2459058 /var/tmp/bdevperf.sock 00:30:10.310 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2459058 ']' 00:30:10.310 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:10.310 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:10.310 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:10.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:10.310 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:10.310 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:10.570 [2024-11-19 11:41:24.091673] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:30:10.570 [2024-11-19 11:41:24.091723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2459058 ] 00:30:10.570 [2024-11-19 11:41:24.165797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.570 [2024-11-19 11:41:24.208821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.570 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.570 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:10.570 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:11.138 Nvme0n1 00:30:11.138 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:11.138 [ 00:30:11.138 { 00:30:11.138 "name": "Nvme0n1", 00:30:11.138 "aliases": [ 00:30:11.138 "a25200be-577c-47fa-bebe-ec8bb1a88eff" 00:30:11.138 ], 00:30:11.138 "product_name": "NVMe disk", 00:30:11.138 "block_size": 4096, 00:30:11.138 "num_blocks": 38912, 00:30:11.138 "uuid": "a25200be-577c-47fa-bebe-ec8bb1a88eff", 00:30:11.138 "numa_id": 1, 00:30:11.138 "assigned_rate_limits": { 00:30:11.138 "rw_ios_per_sec": 0, 00:30:11.139 "rw_mbytes_per_sec": 0, 00:30:11.139 "r_mbytes_per_sec": 0, 00:30:11.139 "w_mbytes_per_sec": 0 00:30:11.139 }, 00:30:11.139 "claimed": false, 00:30:11.139 "zoned": false, 
00:30:11.139 "supported_io_types": { 00:30:11.139 "read": true, 00:30:11.139 "write": true, 00:30:11.139 "unmap": true, 00:30:11.139 "flush": true, 00:30:11.139 "reset": true, 00:30:11.139 "nvme_admin": true, 00:30:11.139 "nvme_io": true, 00:30:11.139 "nvme_io_md": false, 00:30:11.139 "write_zeroes": true, 00:30:11.139 "zcopy": false, 00:30:11.139 "get_zone_info": false, 00:30:11.139 "zone_management": false, 00:30:11.139 "zone_append": false, 00:30:11.139 "compare": true, 00:30:11.139 "compare_and_write": true, 00:30:11.139 "abort": true, 00:30:11.139 "seek_hole": false, 00:30:11.139 "seek_data": false, 00:30:11.139 "copy": true, 00:30:11.139 "nvme_iov_md": false 00:30:11.139 }, 00:30:11.139 "memory_domains": [ 00:30:11.139 { 00:30:11.139 "dma_device_id": "system", 00:30:11.139 "dma_device_type": 1 00:30:11.139 } 00:30:11.139 ], 00:30:11.139 "driver_specific": { 00:30:11.139 "nvme": [ 00:30:11.139 { 00:30:11.139 "trid": { 00:30:11.139 "trtype": "TCP", 00:30:11.139 "adrfam": "IPv4", 00:30:11.139 "traddr": "10.0.0.2", 00:30:11.139 "trsvcid": "4420", 00:30:11.139 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:11.139 }, 00:30:11.139 "ctrlr_data": { 00:30:11.139 "cntlid": 1, 00:30:11.139 "vendor_id": "0x8086", 00:30:11.139 "model_number": "SPDK bdev Controller", 00:30:11.139 "serial_number": "SPDK0", 00:30:11.139 "firmware_revision": "25.01", 00:30:11.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:11.139 "oacs": { 00:30:11.139 "security": 0, 00:30:11.139 "format": 0, 00:30:11.139 "firmware": 0, 00:30:11.139 "ns_manage": 0 00:30:11.139 }, 00:30:11.139 "multi_ctrlr": true, 00:30:11.139 "ana_reporting": false 00:30:11.139 }, 00:30:11.139 "vs": { 00:30:11.139 "nvme_version": "1.3" 00:30:11.139 }, 00:30:11.139 "ns_data": { 00:30:11.139 "id": 1, 00:30:11.139 "can_share": true 00:30:11.139 } 00:30:11.139 } 00:30:11.139 ], 00:30:11.139 "mp_policy": "active_passive" 00:30:11.139 } 00:30:11.139 } 00:30:11.139 ] 00:30:11.398 11:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2459233 00:30:11.398 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:11.398 11:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:11.398 Running I/O for 10 seconds... 00:30:12.335 Latency(us) 00:30:12.335 [2024-11-19T10:41:26.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:12.335 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:30:12.335 [2024-11-19T10:41:26.116Z] =================================================================================================================== 00:30:12.335 [2024-11-19T10:41:26.116Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:30:12.335 00:30:13.271 11:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 31122478-5f87-486c-b87d-0b8a2260d25b 00:30:13.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.271 Nvme0n1 : 2.00 22496.00 87.88 0.00 0.00 0.00 0.00 0.00 00:30:13.271 [2024-11-19T10:41:27.053Z] =================================================================================================================== 00:30:13.272 [2024-11-19T10:41:27.053Z] Total : 22496.00 87.88 0.00 0.00 0.00 0.00 0.00 00:30:13.272 00:30:13.530 true 00:30:13.530 11:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 31122478-5f87-486c-b87d-0b8a2260d25b 00:30:13.530 11:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:13.789 11:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:13.789 11:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:13.789 11:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2459233 00:30:14.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:14.357 Nvme0n1 : 3.00 22617.33 88.35 0.00 0.00 0.00 0.00 0.00 00:30:14.357 [2024-11-19T10:41:28.138Z] =================================================================================================================== 00:30:14.357 [2024-11-19T10:41:28.138Z] Total : 22617.33 88.35 0.00 0.00 0.00 0.00 0.00 00:30:14.357 00:30:15.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.295 Nvme0n1 : 4.00 22709.75 88.71 0.00 0.00 0.00 0.00 0.00 00:30:15.295 [2024-11-19T10:41:29.076Z] =================================================================================================================== 00:30:15.295 [2024-11-19T10:41:29.076Z] Total : 22709.75 88.71 0.00 0.00 0.00 0.00 0.00 00:30:15.295 00:30:16.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:16.673 Nvme0n1 : 5.00 22765.20 88.93 0.00 0.00 0.00 0.00 0.00 00:30:16.673 [2024-11-19T10:41:30.454Z] =================================================================================================================== 00:30:16.673 [2024-11-19T10:41:30.454Z] Total : 22765.20 88.93 0.00 0.00 0.00 0.00 0.00 00:30:16.673 00:30:17.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:17.609 Nvme0n1 : 6.00 22781.00 88.99 0.00 0.00 0.00 0.00 0.00 00:30:17.609 [2024-11-19T10:41:31.390Z] =================================================================================================================== 00:30:17.609 [2024-11-19T10:41:31.390Z] Total : 22781.00 88.99 0.00 0.00 0.00 0.00 0.00 00:30:17.609 00:30:18.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.546 Nvme0n1 : 7.00 22810.43 89.10 0.00 0.00 0.00 0.00 0.00 00:30:18.546 [2024-11-19T10:41:32.327Z] =================================================================================================================== 00:30:18.546 [2024-11-19T10:41:32.327Z] Total : 22810.43 89.10 0.00 0.00 0.00 0.00 0.00 00:30:18.546 00:30:19.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.483 Nvme0n1 : 8.00 22832.50 89.19 0.00 0.00 0.00 0.00 0.00 00:30:19.483 [2024-11-19T10:41:33.264Z] =================================================================================================================== 00:30:19.483 [2024-11-19T10:41:33.264Z] Total : 22832.50 89.19 0.00 0.00 0.00 0.00 0.00 00:30:19.483 00:30:20.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.421 Nvme0n1 : 9.00 22856.78 89.28 0.00 0.00 0.00 0.00 0.00 00:30:20.421 [2024-11-19T10:41:34.202Z] =================================================================================================================== 00:30:20.421 [2024-11-19T10:41:34.202Z] Total : 22856.78 89.28 0.00 0.00 0.00 0.00 0.00 00:30:20.421 00:30:21.358 00:30:21.358 Latency(us) 00:30:21.358 [2024-11-19T10:41:35.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.358 Nvme0n1 : 10.00 22878.89 89.37 0.00 0.00 5591.73 3305.29 25986.45 00:30:21.358 [2024-11-19T10:41:35.139Z] 
=================================================================================================================== 00:30:21.358 [2024-11-19T10:41:35.139Z] Total : 22878.89 89.37 0.00 0.00 5591.73 3305.29 25986.45 00:30:21.358 { 00:30:21.358 "results": [ 00:30:21.358 { 00:30:21.358 "job": "Nvme0n1", 00:30:21.358 "core_mask": "0x2", 00:30:21.358 "workload": "randwrite", 00:30:21.358 "status": "finished", 00:30:21.358 "queue_depth": 128, 00:30:21.358 "io_size": 4096, 00:30:21.358 "runtime": 10.001621, 00:30:21.358 "iops": 22878.891331715127, 00:30:21.358 "mibps": 89.37066926451222, 00:30:21.358 "io_failed": 0, 00:30:21.358 "io_timeout": 0, 00:30:21.358 "avg_latency_us": 5591.734187822226, 00:30:21.358 "min_latency_us": 3305.2939130434784, 00:30:21.358 "max_latency_us": 25986.448695652172 00:30:21.358 } 00:30:21.358 ], 00:30:21.358 "core_count": 1 00:30:21.358 } 00:30:21.358 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2459058 00:30:21.358 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2459058 ']' 00:30:21.358 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2459058 00:30:21.358 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:21.358 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.358 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2459058 00:30:21.358 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:21.358 11:41:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:21.358 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2459058' 00:30:21.358 killing process with pid 2459058 00:30:21.358 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2459058 00:30:21.358 Received shutdown signal, test time was about 10.000000 seconds 00:30:21.358 00:30:21.358 Latency(us) 00:30:21.358 [2024-11-19T10:41:35.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.358 [2024-11-19T10:41:35.139Z] =================================================================================================================== 00:30:21.358 [2024-11-19T10:41:35.139Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:21.358 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2459058 00:30:21.617 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:21.876 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31122478-5f87-486c-b87d-0b8a2260d25b 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r 
'.[0].free_clusters' 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2455982 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2455982 00:30:22.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2455982 Killed "${NVMF_APP[@]}" "$@" 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2460901 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2460901 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:22.136 
11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2460901 ']' 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.136 11:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:22.395 [2024-11-19 11:41:35.938287] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:22.395 [2024-11-19 11:41:35.939236] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:22.395 [2024-11-19 11:41:35.939272] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.395 [2024-11-19 11:41:36.018040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.395 [2024-11-19 11:41:36.058933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.395 [2024-11-19 11:41:36.058972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:22.395 [2024-11-19 11:41:36.058980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.395 [2024-11-19 11:41:36.058987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.395 [2024-11-19 11:41:36.058992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.395 [2024-11-19 11:41:36.059512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.395 [2024-11-19 11:41:36.126642] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:22.395 [2024-11-19 11:41:36.126848] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:22.395 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.395 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:22.395 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:22.395 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:22.395 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:22.655 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.655 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:22.655 [2024-11-19 11:41:36.373027] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:22.655 [2024-11-19 11:41:36.373209] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:22.655 [2024-11-19 11:41:36.373294] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:22.655 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:22.655 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a25200be-577c-47fa-bebe-ec8bb1a88eff 00:30:22.655 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a25200be-577c-47fa-bebe-ec8bb1a88eff 00:30:22.655 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:22.655 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:22.655 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:22.655 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:22.655 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:22.913 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a25200be-577c-47fa-bebe-ec8bb1a88eff -t 2000 00:30:23.172 [ 
00:30:23.172 { 00:30:23.172 "name": "a25200be-577c-47fa-bebe-ec8bb1a88eff", 00:30:23.172 "aliases": [ 00:30:23.172 "lvs/lvol" 00:30:23.172 ], 00:30:23.172 "product_name": "Logical Volume", 00:30:23.172 "block_size": 4096, 00:30:23.172 "num_blocks": 38912, 00:30:23.172 "uuid": "a25200be-577c-47fa-bebe-ec8bb1a88eff", 00:30:23.172 "assigned_rate_limits": { 00:30:23.172 "rw_ios_per_sec": 0, 00:30:23.172 "rw_mbytes_per_sec": 0, 00:30:23.172 "r_mbytes_per_sec": 0, 00:30:23.172 "w_mbytes_per_sec": 0 00:30:23.172 }, 00:30:23.172 "claimed": false, 00:30:23.172 "zoned": false, 00:30:23.172 "supported_io_types": { 00:30:23.172 "read": true, 00:30:23.172 "write": true, 00:30:23.172 "unmap": true, 00:30:23.172 "flush": false, 00:30:23.172 "reset": true, 00:30:23.172 "nvme_admin": false, 00:30:23.172 "nvme_io": false, 00:30:23.172 "nvme_io_md": false, 00:30:23.172 "write_zeroes": true, 00:30:23.172 "zcopy": false, 00:30:23.172 "get_zone_info": false, 00:30:23.172 "zone_management": false, 00:30:23.172 "zone_append": false, 00:30:23.172 "compare": false, 00:30:23.172 "compare_and_write": false, 00:30:23.172 "abort": false, 00:30:23.172 "seek_hole": true, 00:30:23.172 "seek_data": true, 00:30:23.172 "copy": false, 00:30:23.172 "nvme_iov_md": false 00:30:23.172 }, 00:30:23.172 "driver_specific": { 00:30:23.172 "lvol": { 00:30:23.172 "lvol_store_uuid": "31122478-5f87-486c-b87d-0b8a2260d25b", 00:30:23.172 "base_bdev": "aio_bdev", 00:30:23.172 "thin_provision": false, 00:30:23.172 "num_allocated_clusters": 38, 00:30:23.173 "snapshot": false, 00:30:23.173 "clone": false, 00:30:23.173 "esnap_clone": false 00:30:23.173 } 00:30:23.173 } 00:30:23.173 } 00:30:23.173 ] 00:30:23.173 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:23.173 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31122478-5f87-486c-b87d-0b8a2260d25b 00:30:23.173 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:23.432 11:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:23.432 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31122478-5f87-486c-b87d-0b8a2260d25b 00:30:23.432 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:23.432 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:23.432 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:23.691 [2024-11-19 11:41:37.352089] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:23.691 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31122478-5f87-486c-b87d-0b8a2260d25b 00:30:23.691 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:23.691 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
31122478-5f87-486c-b87d-0b8a2260d25b 00:30:23.691 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.691 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.691 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.691 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.691 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.691 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.691 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.691 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:23.691 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31122478-5f87-486c-b87d-0b8a2260d25b 00:30:23.950 request: 00:30:23.950 { 00:30:23.950 "uuid": "31122478-5f87-486c-b87d-0b8a2260d25b", 00:30:23.950 "method": "bdev_lvol_get_lvstores", 00:30:23.950 "req_id": 1 00:30:23.950 } 00:30:23.950 Got JSON-RPC 
error response 00:30:23.950 response: 00:30:23.950 { 00:30:23.950 "code": -19, 00:30:23.950 "message": "No such device" 00:30:23.950 } 00:30:23.950 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:23.950 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:23.950 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:23.950 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:23.950 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:24.209 aio_bdev 00:30:24.209 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a25200be-577c-47fa-bebe-ec8bb1a88eff 00:30:24.209 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a25200be-577c-47fa-bebe-ec8bb1a88eff 00:30:24.209 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:24.209 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:24.209 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:24.209 11:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:24.209 11:41:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:24.468 11:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a25200be-577c-47fa-bebe-ec8bb1a88eff -t 2000 00:30:24.468 [ 00:30:24.468 { 00:30:24.468 "name": "a25200be-577c-47fa-bebe-ec8bb1a88eff", 00:30:24.468 "aliases": [ 00:30:24.468 "lvs/lvol" 00:30:24.468 ], 00:30:24.468 "product_name": "Logical Volume", 00:30:24.468 "block_size": 4096, 00:30:24.468 "num_blocks": 38912, 00:30:24.468 "uuid": "a25200be-577c-47fa-bebe-ec8bb1a88eff", 00:30:24.468 "assigned_rate_limits": { 00:30:24.468 "rw_ios_per_sec": 0, 00:30:24.468 "rw_mbytes_per_sec": 0, 00:30:24.468 "r_mbytes_per_sec": 0, 00:30:24.468 "w_mbytes_per_sec": 0 00:30:24.468 }, 00:30:24.468 "claimed": false, 00:30:24.468 "zoned": false, 00:30:24.468 "supported_io_types": { 00:30:24.468 "read": true, 00:30:24.468 "write": true, 00:30:24.468 "unmap": true, 00:30:24.468 "flush": false, 00:30:24.468 "reset": true, 00:30:24.468 "nvme_admin": false, 00:30:24.468 "nvme_io": false, 00:30:24.468 "nvme_io_md": false, 00:30:24.468 "write_zeroes": true, 00:30:24.468 "zcopy": false, 00:30:24.468 "get_zone_info": false, 00:30:24.468 "zone_management": false, 00:30:24.468 "zone_append": false, 00:30:24.468 "compare": false, 00:30:24.468 "compare_and_write": false, 00:30:24.468 "abort": false, 00:30:24.468 "seek_hole": true, 00:30:24.468 "seek_data": true, 00:30:24.468 "copy": false, 00:30:24.468 "nvme_iov_md": false 00:30:24.468 }, 00:30:24.468 "driver_specific": { 00:30:24.468 "lvol": { 00:30:24.468 "lvol_store_uuid": "31122478-5f87-486c-b87d-0b8a2260d25b", 00:30:24.468 "base_bdev": "aio_bdev", 00:30:24.468 "thin_provision": false, 00:30:24.468 "num_allocated_clusters": 38, 00:30:24.468 
"snapshot": false, 00:30:24.468 "clone": false, 00:30:24.468 "esnap_clone": false 00:30:24.468 } 00:30:24.468 } 00:30:24.468 } 00:30:24.468 ] 00:30:24.468 11:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:24.468 11:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31122478-5f87-486c-b87d-0b8a2260d25b 00:30:24.468 11:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:24.727 11:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:24.727 11:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:24.727 11:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31122478-5f87-486c-b87d-0b8a2260d25b 00:30:24.986 11:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:24.986 11:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a25200be-577c-47fa-bebe-ec8bb1a88eff 00:30:25.246 11:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31122478-5f87-486c-b87d-0b8a2260d25b 00:30:25.246 11:41:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:25.505 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:25.505 00:30:25.505 real 0m17.196s 00:30:25.505 user 0m34.601s 00:30:25.505 sys 0m3.875s 00:30:25.505 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.505 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:25.505 ************************************ 00:30:25.505 END TEST lvs_grow_dirty 00:30:25.505 ************************************ 00:30:25.505 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:25.505 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:25.505 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:25.505 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:25.505 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:25.505 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:25.505 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:25.505 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 
00:30:25.505 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:25.505 nvmf_trace.0 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:25.764 rmmod nvme_tcp 00:30:25.764 rmmod nvme_fabrics 00:30:25.764 rmmod nvme_keyring 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2460901 ']' 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2460901 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@954 -- # '[' -z 2460901 ']' 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2460901 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2460901 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2460901' 00:30:25.764 killing process with pid 2460901 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2460901 00:30:25.764 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2460901 00:30:26.024 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:26.024 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:26.024 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:26.024 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:26.024 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:26.024 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:26.024 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:26.024 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:26.024 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:26.024 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.024 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.024 11:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.944 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.944 00:30:27.944 real 0m42.132s 00:30:27.944 user 0m52.382s 00:30:27.944 sys 0m10.352s 00:30:27.944 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.944 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:27.944 ************************************ 00:30:27.944 END TEST nvmf_lvs_grow 00:30:27.944 ************************************ 00:30:27.945 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:27.945 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:27.945 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.945 11:41:41 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:28.204 ************************************ 00:30:28.204 START TEST nvmf_bdev_io_wait 00:30:28.204 ************************************ 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:28.204 * Looking for test storage... 00:30:28.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@337 -- # read -ra ver2 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.204 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:28.205 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.205 --rc genhtml_branch_coverage=1 00:30:28.205 --rc genhtml_function_coverage=1 00:30:28.205 --rc genhtml_legend=1 00:30:28.205 --rc geninfo_all_blocks=1 00:30:28.205 --rc geninfo_unexecuted_blocks=1 00:30:28.205 00:30:28.205 ' 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:28.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.205 --rc genhtml_branch_coverage=1 00:30:28.205 --rc genhtml_function_coverage=1 00:30:28.205 --rc genhtml_legend=1 00:30:28.205 --rc geninfo_all_blocks=1 00:30:28.205 --rc geninfo_unexecuted_blocks=1 00:30:28.205 00:30:28.205 ' 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:28.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.205 --rc genhtml_branch_coverage=1 00:30:28.205 --rc genhtml_function_coverage=1 00:30:28.205 --rc genhtml_legend=1 00:30:28.205 --rc geninfo_all_blocks=1 00:30:28.205 --rc geninfo_unexecuted_blocks=1 00:30:28.205 00:30:28.205 ' 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:28.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.205 --rc genhtml_branch_coverage=1 00:30:28.205 --rc genhtml_function_coverage=1 00:30:28.205 --rc genhtml_legend=1 00:30:28.205 --rc geninfo_all_blocks=1 00:30:28.205 --rc geninfo_unexecuted_blocks=1 00:30:28.205 00:30:28.205 ' 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:28.205 11:41:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.205 11:41:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # 
have_pci_nics=0 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:28.205 11:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:34.924 11:41:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:34.924 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:34.924 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.924 11:41:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:30:34.924 Found net devices under 0000:86:00.0: cvl_0_0 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:34.924 Found net devices under 0000:86:00.1: cvl_0_1 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.924 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.925 11:41:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:34.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:34.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:30:34.925 00:30:34.925 --- 10.0.0.2 ping statistics --- 00:30:34.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.925 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:34.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:30:34.925 00:30:34.925 --- 10.0.0.1 ping statistics --- 00:30:34.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.925 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:34.925 11:41:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2464994 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2464994 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2464994 ']' 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:34.925 11:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.925 [2024-11-19 11:41:47.925700] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:34.925 [2024-11-19 11:41:47.926643] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:34.925 [2024-11-19 11:41:47.926675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.925 [2024-11-19 11:41:48.005782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:34.925 [2024-11-19 11:41:48.049505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.925 [2024-11-19 11:41:48.049542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.925 [2024-11-19 11:41:48.049549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.925 [2024-11-19 11:41:48.049555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.925 [2024-11-19 11:41:48.049560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:34.925 [2024-11-19 11:41:48.051122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.925 [2024-11-19 11:41:48.051233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.925 [2024-11-19 11:41:48.051341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.925 [2024-11-19 11:41:48.051342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:34.925 [2024-11-19 11:41:48.051600] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.925 11:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.925 [2024-11-19 11:41:48.176227] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:34.925 [2024-11-19 11:41:48.176921] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:34.925 [2024-11-19 11:41:48.177139] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:34.925 [2024-11-19 11:41:48.177266] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.925 [2024-11-19 11:41:48.187883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.925 Malloc0 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.925 11:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.925 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:34.926 [2024-11-19 11:41:48.256019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2465193 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2465195 00:30:34.926 11:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.926 { 00:30:34.926 "params": { 00:30:34.926 "name": "Nvme$subsystem", 00:30:34.926 "trtype": "$TEST_TRANSPORT", 00:30:34.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.926 "adrfam": "ipv4", 00:30:34.926 "trsvcid": "$NVMF_PORT", 00:30:34.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.926 "hdgst": ${hdgst:-false}, 00:30:34.926 "ddgst": ${ddgst:-false} 00:30:34.926 }, 00:30:34.926 "method": "bdev_nvme_attach_controller" 00:30:34.926 } 00:30:34.926 EOF 00:30:34.926 )") 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2465197 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.926 11:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2465200 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.926 { 00:30:34.926 "params": { 00:30:34.926 "name": "Nvme$subsystem", 00:30:34.926 "trtype": "$TEST_TRANSPORT", 00:30:34.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.926 "adrfam": "ipv4", 00:30:34.926 "trsvcid": "$NVMF_PORT", 00:30:34.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.926 "hdgst": ${hdgst:-false}, 00:30:34.926 "ddgst": ${ddgst:-false} 00:30:34.926 }, 00:30:34.926 "method": "bdev_nvme_attach_controller" 00:30:34.926 } 00:30:34.926 EOF 00:30:34.926 )") 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.926 { 00:30:34.926 "params": { 00:30:34.926 "name": "Nvme$subsystem", 00:30:34.926 "trtype": "$TEST_TRANSPORT", 00:30:34.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.926 "adrfam": "ipv4", 00:30:34.926 "trsvcid": "$NVMF_PORT", 00:30:34.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.926 "hdgst": ${hdgst:-false}, 00:30:34.926 "ddgst": ${ddgst:-false} 00:30:34.926 }, 00:30:34.926 "method": "bdev_nvme_attach_controller" 00:30:34.926 } 00:30:34.926 EOF 00:30:34.926 )") 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.926 { 00:30:34.926 "params": { 00:30:34.926 "name": "Nvme$subsystem", 00:30:34.926 "trtype": "$TEST_TRANSPORT", 00:30:34.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.926 "adrfam": "ipv4", 00:30:34.926 "trsvcid": "$NVMF_PORT", 00:30:34.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.926 "hdgst": ${hdgst:-false}, 00:30:34.926 "ddgst": ${ddgst:-false} 00:30:34.926 }, 00:30:34.926 "method": 
"bdev_nvme_attach_controller" 00:30:34.926 } 00:30:34.926 EOF 00:30:34.926 )") 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2465193 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:34.926 "params": { 00:30:34.926 "name": "Nvme1", 00:30:34.926 "trtype": "tcp", 00:30:34.926 "traddr": "10.0.0.2", 00:30:34.926 "adrfam": "ipv4", 00:30:34.926 "trsvcid": "4420", 00:30:34.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.926 "hdgst": false, 00:30:34.926 "ddgst": false 00:30:34.926 }, 00:30:34.926 "method": "bdev_nvme_attach_controller" 00:30:34.926 }' 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:34.926 "params": { 00:30:34.926 "name": "Nvme1", 00:30:34.926 "trtype": "tcp", 00:30:34.926 "traddr": "10.0.0.2", 00:30:34.926 "adrfam": "ipv4", 00:30:34.926 "trsvcid": "4420", 00:30:34.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.926 "hdgst": false, 00:30:34.926 "ddgst": false 00:30:34.926 }, 00:30:34.926 "method": "bdev_nvme_attach_controller" 00:30:34.926 }' 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:34.926 "params": { 00:30:34.926 "name": "Nvme1", 00:30:34.926 "trtype": "tcp", 00:30:34.926 "traddr": "10.0.0.2", 00:30:34.926 "adrfam": "ipv4", 00:30:34.926 "trsvcid": "4420", 00:30:34.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.926 "hdgst": false, 00:30:34.926 "ddgst": false 00:30:34.926 }, 00:30:34.926 "method": "bdev_nvme_attach_controller" 00:30:34.926 }' 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:34.926 11:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:34.926 "params": { 00:30:34.926 "name": "Nvme1", 00:30:34.926 "trtype": "tcp", 00:30:34.926 "traddr": "10.0.0.2", 00:30:34.926 "adrfam": "ipv4", 00:30:34.926 "trsvcid": "4420", 00:30:34.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.926 "hdgst": false, 00:30:34.926 "ddgst": false 00:30:34.926 }, 00:30:34.926 "method": "bdev_nvme_attach_controller" 
00:30:34.926 }' 00:30:34.926 [2024-11-19 11:41:48.307614] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:34.927 [2024-11-19 11:41:48.307659] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:34.927 [2024-11-19 11:41:48.308849] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:34.927 [2024-11-19 11:41:48.308848] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:34.927 [2024-11-19 11:41:48.308908] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-19 11:41:48.308908] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:34.927 --proc-type=auto ] 00:30:34.927 [2024-11-19 11:41:48.312734] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:30:34.927 [2024-11-19 11:41:48.312774] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:34.927 [2024-11-19 11:41:48.453780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.927 [2024-11-19 11:41:48.486518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:34.927 [2024-11-19 11:41:48.563603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.927 [2024-11-19 11:41:48.606780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:34.927 [2024-11-19 11:41:48.656987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.187 [2024-11-19 11:41:48.711014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.187 [2024-11-19 11:41:48.711265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:35.187 [2024-11-19 11:41:48.753943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:35.187 Running I/O for 1 seconds... 00:30:35.187 Running I/O for 1 seconds... 00:30:35.187 Running I/O for 1 seconds... 00:30:35.187 Running I/O for 1 seconds... 
00:30:36.124 8029.00 IOPS, 31.36 MiB/s 00:30:36.124 Latency(us) 00:30:36.124 [2024-11-19T10:41:49.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.124 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:36.124 Nvme1n1 : 1.02 8029.62 31.37 0.00 0.00 15827.79 1503.05 23706.94 00:30:36.124 [2024-11-19T10:41:49.905Z] =================================================================================================================== 00:30:36.124 [2024-11-19T10:41:49.905Z] Total : 8029.62 31.37 0.00 0.00 15827.79 1503.05 23706.94 00:30:36.124 11730.00 IOPS, 45.82 MiB/s 00:30:36.124 Latency(us) 00:30:36.124 [2024-11-19T10:41:49.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.124 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:36.124 Nvme1n1 : 1.01 11775.72 46.00 0.00 0.00 10829.54 4217.10 15386.71 00:30:36.124 [2024-11-19T10:41:49.905Z] =================================================================================================================== 00:30:36.124 [2024-11-19T10:41:49.905Z] Total : 11775.72 46.00 0.00 0.00 10829.54 4217.10 15386.71 00:30:36.383 8125.00 IOPS, 31.74 MiB/s 00:30:36.383 Latency(us) 00:30:36.383 [2024-11-19T10:41:50.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.383 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:36.383 Nvme1n1 : 1.01 8261.06 32.27 0.00 0.00 15463.82 2763.91 31685.23 00:30:36.383 [2024-11-19T10:41:50.164Z] =================================================================================================================== 00:30:36.383 [2024-11-19T10:41:50.164Z] Total : 8261.06 32.27 0.00 0.00 15463.82 2763.91 31685.23 00:30:36.383 246144.00 IOPS, 961.50 MiB/s [2024-11-19T10:41:50.164Z] 11:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2465195 00:30:36.383 00:30:36.383 
Latency(us) 00:30:36.383 [2024-11-19T10:41:50.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.383 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:36.383 Nvme1n1 : 1.00 245764.18 960.02 0.00 0.00 517.89 231.51 1538.67 00:30:36.383 [2024-11-19T10:41:50.164Z] =================================================================================================================== 00:30:36.383 [2024-11-19T10:41:50.164Z] Total : 245764.18 960.02 0.00 0.00 517.89 231.51 1538.67 00:30:36.383 11:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2465197 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2465200 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:36.383 rmmod nvme_tcp 00:30:36.383 rmmod nvme_fabrics 00:30:36.383 rmmod nvme_keyring 00:30:36.383 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:36.642 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:36.642 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:36.642 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2464994 ']' 00:30:36.642 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2464994 00:30:36.642 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2464994 ']' 00:30:36.642 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2464994 00:30:36.642 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2464994 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:36.643 11:41:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2464994' 00:30:36.643 killing process with pid 2464994 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2464994 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2464994 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.643 11:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.180 00:30:39.180 real 0m10.706s 00:30:39.180 user 0m14.806s 00:30:39.180 sys 0m6.431s 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:39.180 ************************************ 00:30:39.180 END TEST nvmf_bdev_io_wait 00:30:39.180 ************************************ 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:39.180 ************************************ 00:30:39.180 START TEST nvmf_queue_depth 00:30:39.180 ************************************ 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:39.180 * Looking for test storage... 
00:30:39.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.180 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:39.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.181 --rc genhtml_branch_coverage=1 00:30:39.181 --rc genhtml_function_coverage=1 00:30:39.181 --rc genhtml_legend=1 00:30:39.181 --rc geninfo_all_blocks=1 00:30:39.181 --rc geninfo_unexecuted_blocks=1 00:30:39.181 00:30:39.181 ' 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:39.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.181 --rc genhtml_branch_coverage=1 00:30:39.181 --rc genhtml_function_coverage=1 00:30:39.181 --rc genhtml_legend=1 00:30:39.181 --rc geninfo_all_blocks=1 00:30:39.181 --rc geninfo_unexecuted_blocks=1 00:30:39.181 00:30:39.181 ' 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:39.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.181 --rc genhtml_branch_coverage=1 00:30:39.181 --rc genhtml_function_coverage=1 00:30:39.181 --rc genhtml_legend=1 00:30:39.181 --rc geninfo_all_blocks=1 00:30:39.181 --rc geninfo_unexecuted_blocks=1 00:30:39.181 00:30:39.181 ' 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:39.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.181 --rc genhtml_branch_coverage=1 00:30:39.181 --rc genhtml_function_coverage=1 00:30:39.181 --rc genhtml_legend=1 00:30:39.181 --rc 
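The trace above walks SPDK's `cmp_versions`/`lt` helpers component by component to decide whether the installed lcov predates 2.0. A minimal re-implementation (an illustrative sketch, not SPDK's actual `scripts/common.sh`) of that dotted-version comparison:

```shell
# Hedged re-implementation of the version check traced above: split each
# version on ".-:" and compare component by component, treating a missing
# component as 0 (so 1.15 and 1.15.0 compare equal).
lt() { # succeeds when version $1 is strictly less than version $2
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1 # equal versions are not less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2"   # the check the log performs
```

The log's `lt 1.15 2` succeeding is what selects the extra `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options for older lcov.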
geninfo_all_blocks=1 00:30:39.181 --rc geninfo_unexecuted_blocks=1 00:30:39.181 00:30:39.181 ' 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.181 11:41:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.181 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.182 11:41:52 
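Note that the `PATH` echoed by `paths/export.sh@6` above contains each toolchain prefix (`/opt/go/1.21.1/bin`, `/opt/protoc/21.7/bin`, `/opt/golangci/1.54.2/bin`) many times, because `export.sh` prepends on every re-source. A hedged one-liner (not part of SPDK, shown only to illustrate the duplication) that would collapse such a path while preserving first-occurrence order:

```shell
# Split PATH on ":", keep only the first occurrence of each entry, and
# rejoin. awk's RS/ORS do the splitting; sed drops the trailing colon.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

echo "$(dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin")"
# → /opt/go/bin:/usr/bin
```

The duplication is harmless for lookup (first match wins) but inflates every subsequent `export PATH` line in the trace.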
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:39.182 11:41:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:39.182 11:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.757 
11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:45.757 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.757 11:41:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:45.757 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:45.757 Found net devices under 0000:86:00.0: cvl_0_0 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:45.757 Found net devices under 0000:86:00.1: cvl_0_1 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.757 11:41:58 
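The device discovery above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by `"${pci_net_devs[@]##*/}"`) maps each supported PCI function to its kernel interface names via sysfs, yielding `cvl_0_0` and `cvl_0_1` for the two e810 ports. A standalone sketch of that lookup (the optional base-directory argument is an addition here so the function can be exercised against a fake sysfs tree; it is not in SPDK):

```shell
# Resolve a PCI address like 0000:86:00.0 to the net device(s) bound to it
# by globbing the function's sysfs "net" directory.
pci_to_netdevs() {
    local pci=$1 base=${2:-/sys/bus/pci/devices}
    local -a devs=("$base/$pci/net/"*)
    [[ -e ${devs[0]} ]] || return 1        # no net device bound to this function
    printf '%s\n' "${devs[@]##*/}"         # strip the sysfs path, keep interface names
}

pci_to_netdevs 0000:86:00.0 || echo "no net device (expected off the test rig)"
```

On the WFP8 test node this is what produces the "Found net devices under 0000:86:00.0: cvl_0_0" lines in the log.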
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:45.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:30:45.757 00:30:45.757 --- 10.0.0.2 ping statistics --- 00:30:45.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.757 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:30:45.757 00:30:45.757 --- 10.0.0.1 ping statistics --- 00:30:45.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.757 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.757 11:41:58 
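The `nvmf_tcp_init` sequence above moves one port of the NIC into a private network namespace so that target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, in the root namespace) talk over a real TCP path on a single host, then verifies it with the two pings. A dry-run replay of that plumbing — interface names (`cvl_0_0`/`cvl_0_1`), the 10.0.0.0/24 addressing, and port 4420 come from the log; `run` only echoes the commands, so no root is needed:

```shell
run() { echo "+ $*"; }   # echo-only wrapper; replace with "sudo" to execute for real

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                               # target lives in its own netns
run ip link set cvl_0_0 netns "$NS"                  # move one NIC port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in root netns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
run ping -c 1 10.0.0.2                               # target reachable before tests start
```

Every later target-side command in the log (`NVMF_TARGET_NS_CMD`) is prefixed with `ip netns exec cvl_0_0_ns_spdk` to run inside this namespace.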
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.757 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2468992 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2468992 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2468992 ']' 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.758 [2024-11-19 11:41:58.666687] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.758 [2024-11-19 11:41:58.667668] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:45.758 [2024-11-19 11:41:58.667708] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.758 [2024-11-19 11:41:58.748679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.758 [2024-11-19 11:41:58.789532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.758 [2024-11-19 11:41:58.789569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.758 [2024-11-19 11:41:58.789577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.758 [2024-11-19 11:41:58.789582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.758 [2024-11-19 11:41:58.789587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.758 [2024-11-19 11:41:58.790149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.758 [2024-11-19 11:41:58.857652] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.758 [2024-11-19 11:41:58.857862] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
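`waitforlisten 2468992` above blocks until the freshly launched `nvmf_tgt` (PID captured into `nvmfpid`) is alive and answering on `/var/tmp/spdk.sock`. A hedged sketch of that polling pattern — SPDK's real version lives in `autotest_common.sh` and retries an RPC call; checking only for the socket file and taking the retry count as an argument are simplifications here:

```shell
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died: give up
        [[ -S $rpc_addr ]] && return 0           # RPC socket exists: target is ready
        sleep 0.1
    done
    return 1                                     # timed out waiting
}
```

Only once this returns does the log proceed to the `rpc_cmd` calls that configure the target.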
00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.758 [2024-11-19 11:41:58.922798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.758 Malloc0 00:30:45.758 11:41:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.758 [2024-11-19 11:41:58.994929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.758 
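The trace above shows `queue_depth.sh` driving the target through five RPCs: create the TCP transport, create a 64 MiB malloc bdev, create subsystem `nqn.2016-06.io.spdk:cnode1`, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. The dry-run sketch below restates that sequence; the `RPC_PY` path and the `run` wrapper are assumptions for illustration (the real script calls `scripts/rpc.py` directly), and `run` only echoes each command so the sketch executes without an SPDK target present.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence traced in queue_depth.sh@23-27 above.
# RPC_PY is a hypothetical default; 'run' echoes instead of executing so this
# is runnable anywhere. Swap 'echo' for "$RPC_PY" "$@" against a live target.
RPC_PY=${RPC_PY:-scripts/rpc.py}
run() { echo "+ $RPC_PY $*"; }

run nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192 B in-capsule data
run bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512 B blocks
run nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
run nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
run nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```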
11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2469015 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:45.758 11:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2469015 /var/tmp/bdevperf.sock 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2469015 ']' 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:45.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.758 [2024-11-19 11:41:59.047230] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:30:45.758 [2024-11-19 11:41:59.047273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469015 ] 00:30:45.758 [2024-11-19 11:41:59.122117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.758 [2024-11-19 11:41:59.164872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:45.758 NVMe0n1 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.758 11:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:46.017 Running I/O for 10 seconds... 
00:30:47.915 11264.00 IOPS, 44.00 MiB/s [2024-11-19T10:42:02.632Z] 11777.00 IOPS, 46.00 MiB/s [2024-11-19T10:42:03.569Z] 11945.67 IOPS, 46.66 MiB/s [2024-11-19T10:42:04.947Z] 12014.75 IOPS, 46.93 MiB/s [2024-11-19T10:42:05.884Z] 12055.20 IOPS, 47.09 MiB/s [2024-11-19T10:42:06.820Z] 12007.00 IOPS, 46.90 MiB/s [2024-11-19T10:42:07.757Z] 12042.86 IOPS, 47.04 MiB/s [2024-11-19T10:42:08.694Z] 12087.12 IOPS, 47.22 MiB/s [2024-11-19T10:42:09.632Z] 12144.78 IOPS, 47.44 MiB/s [2024-11-19T10:42:09.632Z] 12166.50 IOPS, 47.53 MiB/s 00:30:55.851 Latency(us) 00:30:55.851 [2024-11-19T10:42:09.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.851 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:55.851 Verification LBA range: start 0x0 length 0x4000 00:30:55.851 NVMe0n1 : 10.07 12184.97 47.60 0.00 0.00 83744.63 19603.81 54708.31 00:30:55.851 [2024-11-19T10:42:09.632Z] =================================================================================================================== 00:30:55.851 [2024-11-19T10:42:09.632Z] Total : 12184.97 47.60 0.00 0.00 83744.63 19603.81 54708.31 00:30:56.110 { 00:30:56.110 "results": [ 00:30:56.110 { 00:30:56.110 "job": "NVMe0n1", 00:30:56.110 "core_mask": "0x1", 00:30:56.110 "workload": "verify", 00:30:56.110 "status": "finished", 00:30:56.110 "verify_range": { 00:30:56.110 "start": 0, 00:30:56.110 "length": 16384 00:30:56.110 }, 00:30:56.110 "queue_depth": 1024, 00:30:56.110 "io_size": 4096, 00:30:56.110 "runtime": 10.066666, 00:30:56.110 "iops": 12184.967694368721, 00:30:56.110 "mibps": 47.597530056127816, 00:30:56.110 "io_failed": 0, 00:30:56.110 "io_timeout": 0, 00:30:56.110 "avg_latency_us": 83744.62975038512, 00:30:56.110 "min_latency_us": 19603.812173913044, 00:30:56.110 "max_latency_us": 54708.31304347826 00:30:56.110 } 00:30:56.110 ], 00:30:56.110 "core_count": 1 00:30:56.110 } 00:30:56.110 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
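The bdevperf summary above reports both IOPS and MiB/s for the same run, and the two columns are related by the fixed 4096 B I/O size (`"io_size": 4096` in the JSON): MiB/s = IOPS × io_size / 2^20. The snippet below checks that relation against the figures taken from the JSON block above; it is a sanity-check sketch, not part of the test suite.

```shell
#!/usr/bin/env bash
# Cross-check the reported throughput: mibps should equal iops * io_size / 2^20.
# Both input values are copied from the JSON results block above.
iops=12184.967694368721
io_size=4096   # bytes per I/O, from "io_size"

mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / (1024 * 1024) }')
echo "computed MiB/s: $mibps"   # agrees with the reported "mibps": 47.5975... (≈ 47.60)
```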
target/queue_depth.sh@39 -- # killprocess 2469015 00:30:56.110 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2469015 ']' 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2469015 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469015 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469015' 00:30:56.111 killing process with pid 2469015 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2469015 00:30:56.111 Received shutdown signal, test time was about 10.000000 seconds 00:30:56.111 00:30:56.111 Latency(us) 00:30:56.111 [2024-11-19T10:42:09.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.111 [2024-11-19T10:42:09.892Z] =================================================================================================================== 00:30:56.111 [2024-11-19T10:42:09.892Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2469015 00:30:56.111 11:42:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:56.111 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:56.111 rmmod nvme_tcp 00:30:56.369 rmmod nvme_fabrics 00:30:56.369 rmmod nvme_keyring 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2468992 ']' 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2468992 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2468992 ']' 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2468992 00:30:56.369 11:42:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2468992 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2468992' 00:30:56.369 killing process with pid 2468992 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2468992 00:30:56.369 11:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2468992 00:30:56.628 11:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:56.628 11:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:56.628 11:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:56.628 11:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:56.628 11:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:56.628 11:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:56.628 11:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:30:56.628 11:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:56.628 11:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:56.628 11:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.628 11:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.628 11:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.537 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:58.537 00:30:58.537 real 0m19.729s 00:30:58.537 user 0m22.757s 00:30:58.537 sys 0m6.367s 00:30:58.537 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:58.537 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:58.537 ************************************ 00:30:58.537 END TEST nvmf_queue_depth 00:30:58.537 ************************************ 00:30:58.537 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:58.537 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:58.537 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:58.537 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:58.797 ************************************ 00:30:58.797 START 
TEST nvmf_target_multipath 00:30:58.797 ************************************ 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:58.797 * Looking for test storage... 00:30:58.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:58.797 11:42:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:58.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.797 --rc genhtml_branch_coverage=1 00:30:58.797 --rc genhtml_function_coverage=1 00:30:58.797 --rc genhtml_legend=1 00:30:58.797 --rc geninfo_all_blocks=1 00:30:58.797 --rc geninfo_unexecuted_blocks=1 00:30:58.797 00:30:58.797 ' 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:58.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.797 --rc genhtml_branch_coverage=1 00:30:58.797 --rc genhtml_function_coverage=1 00:30:58.797 --rc genhtml_legend=1 00:30:58.797 --rc geninfo_all_blocks=1 00:30:58.797 --rc geninfo_unexecuted_blocks=1 00:30:58.797 00:30:58.797 ' 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:58.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.797 --rc genhtml_branch_coverage=1 00:30:58.797 --rc genhtml_function_coverage=1 00:30:58.797 --rc genhtml_legend=1 00:30:58.797 --rc geninfo_all_blocks=1 00:30:58.797 --rc geninfo_unexecuted_blocks=1 00:30:58.797 00:30:58.797 ' 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:58.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.797 --rc genhtml_branch_coverage=1 00:30:58.797 --rc genhtml_function_coverage=1 00:30:58.797 --rc genhtml_legend=1 00:30:58.797 --rc geninfo_all_blocks=1 00:30:58.797 --rc geninfo_unexecuted_blocks=1 00:30:58.797 00:30:58.797 ' 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.797 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:58.798 11:42:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.798 11:42:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:58.798 11:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:05.369 11:42:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:05.369 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:05.369 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:05.369 Found net devices under 0000:86:00.0: cvl_0_0 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.369 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.370 11:42:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:05.370 Found net devices under 0000:86:00.1: cvl_0_1 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:05.370 11:42:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:05.370 11:42:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:05.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:05.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:31:05.370 00:31:05.370 --- 10.0.0.2 ping statistics --- 00:31:05.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.370 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:05.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:05.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:31:05.370 00:31:05.370 --- 10.0.0.1 ping statistics --- 00:31:05.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.370 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:05.370 only one NIC for nvmf test 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:05.370 11:42:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:05.370 rmmod nvme_tcp 00:31:05.370 rmmod nvme_fabrics 00:31:05.370 rmmod nvme_keyring 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:05.370 11:42:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.370 11:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.771 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:06.771 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:06.771 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:06.771 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.771 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:06.771 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:06.771 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:06.771 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:06.771 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:06.771 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.030 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:07.030 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:07.030 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:07.030 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:07.030 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:07.030 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:07.030 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:07.030 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.031 
11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:07.031 00:31:07.031 real 0m8.256s 00:31:07.031 user 0m1.813s 00:31:07.031 sys 0m4.452s 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:07.031 ************************************ 00:31:07.031 END TEST nvmf_target_multipath 00:31:07.031 ************************************ 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:07.031 ************************************ 00:31:07.031 START TEST nvmf_zcopy 00:31:07.031 ************************************ 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:07.031 * Looking for test storage... 
00:31:07.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:07.031 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:07.292 11:42:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:07.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.292 --rc genhtml_branch_coverage=1 00:31:07.292 --rc genhtml_function_coverage=1 00:31:07.292 --rc genhtml_legend=1 00:31:07.292 --rc geninfo_all_blocks=1 00:31:07.292 --rc geninfo_unexecuted_blocks=1 00:31:07.292 00:31:07.292 ' 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:07.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.292 --rc genhtml_branch_coverage=1 00:31:07.292 --rc genhtml_function_coverage=1 00:31:07.292 --rc genhtml_legend=1 00:31:07.292 --rc geninfo_all_blocks=1 00:31:07.292 --rc geninfo_unexecuted_blocks=1 00:31:07.292 00:31:07.292 ' 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:07.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.292 --rc genhtml_branch_coverage=1 00:31:07.292 --rc genhtml_function_coverage=1 00:31:07.292 --rc genhtml_legend=1 00:31:07.292 --rc geninfo_all_blocks=1 00:31:07.292 --rc geninfo_unexecuted_blocks=1 00:31:07.292 00:31:07.292 ' 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:07.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.292 --rc genhtml_branch_coverage=1 00:31:07.292 --rc genhtml_function_coverage=1 00:31:07.292 --rc genhtml_legend=1 00:31:07.292 --rc geninfo_all_blocks=1 00:31:07.292 --rc geninfo_unexecuted_blocks=1 00:31:07.292 00:31:07.292 ' 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.292 11:42:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:07.292 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:07.293 11:42:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:07.293 11:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:13.880 
11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.880 11:42:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:13.880 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:13.880 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:13.880 Found net devices under 0000:86:00.0: cvl_0_0 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:13.880 Found net devices under 0000:86:00.1: cvl_0_1 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:13.880 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.881 11:42:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:13.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:31:13.881 00:31:13.881 --- 10.0.0.2 ping statistics --- 00:31:13.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.881 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:13.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:31:13.881 00:31:13.881 --- 10.0.0.1 ping statistics --- 00:31:13.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.881 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2478186 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2478186 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2478186 ']' 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:13.881 11:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:13.881 [2024-11-19 11:42:26.816886] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:13.881 [2024-11-19 11:42:26.817811] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:31:13.881 [2024-11-19 11:42:26.817845] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.881 [2024-11-19 11:42:26.896512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.881 [2024-11-19 11:42:26.938053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.881 [2024-11-19 11:42:26.938089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.881 [2024-11-19 11:42:26.938096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:13.881 [2024-11-19 11:42:26.938102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:13.881 [2024-11-19 11:42:26.938107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:13.881 [2024-11-19 11:42:26.938623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.881 [2024-11-19 11:42:27.004011] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:13.881 [2024-11-19 11:42:27.004223] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:13.881 [2024-11-19 11:42:27.067349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:13.881 
11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:13.881 [2024-11-19 11:42:27.095554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:13.881 malloc0 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:13.881 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.882 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:13.882 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:13.882 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:13.882 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:13.882 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:13.882 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:13.882 { 00:31:13.882 "params": { 00:31:13.882 "name": "Nvme$subsystem", 00:31:13.882 "trtype": "$TEST_TRANSPORT", 00:31:13.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:13.882 "adrfam": "ipv4", 00:31:13.882 "trsvcid": "$NVMF_PORT", 00:31:13.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:13.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:13.882 "hdgst": ${hdgst:-false}, 00:31:13.882 "ddgst": ${ddgst:-false} 00:31:13.882 }, 00:31:13.882 "method": "bdev_nvme_attach_controller" 00:31:13.882 } 00:31:13.882 EOF 00:31:13.882 )") 00:31:13.882 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:13.882 11:42:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:13.882 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:13.882 11:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:13.882 "params": { 00:31:13.882 "name": "Nvme1", 00:31:13.882 "trtype": "tcp", 00:31:13.882 "traddr": "10.0.0.2", 00:31:13.882 "adrfam": "ipv4", 00:31:13.882 "trsvcid": "4420", 00:31:13.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:13.882 "hdgst": false, 00:31:13.882 "ddgst": false 00:31:13.882 }, 00:31:13.882 "method": "bdev_nvme_attach_controller" 00:31:13.882 }' 00:31:13.882 [2024-11-19 11:42:27.188797] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:31:13.882 [2024-11-19 11:42:27.188843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478208 ] 00:31:13.882 [2024-11-19 11:42:27.266004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.882 [2024-11-19 11:42:27.307263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.141 Running I/O for 10 seconds... 
00:31:16.010 8363.00 IOPS, 65.34 MiB/s [2024-11-19T10:42:30.724Z] 8422.00 IOPS, 65.80 MiB/s [2024-11-19T10:42:32.102Z] 8421.67 IOPS, 65.79 MiB/s [2024-11-19T10:42:33.039Z] 8434.75 IOPS, 65.90 MiB/s [2024-11-19T10:42:33.976Z] 8433.40 IOPS, 65.89 MiB/s [2024-11-19T10:42:34.912Z] 8440.67 IOPS, 65.94 MiB/s [2024-11-19T10:42:35.847Z] 8448.86 IOPS, 66.01 MiB/s [2024-11-19T10:42:36.782Z] 8452.38 IOPS, 66.03 MiB/s [2024-11-19T10:42:37.718Z] 8456.00 IOPS, 66.06 MiB/s [2024-11-19T10:42:37.718Z] 8454.70 IOPS, 66.05 MiB/s 00:31:23.937 Latency(us) 00:31:23.937 [2024-11-19T10:42:37.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.937 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:23.937 Verification LBA range: start 0x0 length 0x1000 00:31:23.937 Nvme1n1 : 10.01 8459.47 66.09 0.00 0.00 15088.70 379.33 22111.28 00:31:23.937 [2024-11-19T10:42:37.718Z] =================================================================================================================== 00:31:23.937 [2024-11-19T10:42:37.718Z] Total : 8459.47 66.09 0.00 0.00 15088.70 379.33 22111.28 00:31:24.196 11:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2480027 00:31:24.196 11:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:24.196 11:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.197 11:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:24.197 11:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:24.197 11:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:24.197 11:42:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:24.197 11:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:24.197 11:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:24.197 { 00:31:24.197 "params": { 00:31:24.197 "name": "Nvme$subsystem", 00:31:24.197 "trtype": "$TEST_TRANSPORT", 00:31:24.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:24.197 "adrfam": "ipv4", 00:31:24.197 "trsvcid": "$NVMF_PORT", 00:31:24.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:24.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:24.197 "hdgst": ${hdgst:-false}, 00:31:24.197 "ddgst": ${ddgst:-false} 00:31:24.197 }, 00:31:24.197 "method": "bdev_nvme_attach_controller" 00:31:24.197 } 00:31:24.197 EOF 00:31:24.197 )") 00:31:24.197 11:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:24.197 [2024-11-19 11:42:37.858964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.197 [2024-11-19 11:42:37.858995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.197 11:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:24.197 11:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:24.197 11:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:24.197 "params": { 00:31:24.197 "name": "Nvme1", 00:31:24.197 "trtype": "tcp", 00:31:24.197 "traddr": "10.0.0.2", 00:31:24.197 "adrfam": "ipv4", 00:31:24.197 "trsvcid": "4420", 00:31:24.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:24.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:24.197 "hdgst": false, 00:31:24.197 "ddgst": false 00:31:24.197 }, 00:31:24.197 "method": "bdev_nvme_attach_controller" 00:31:24.197 }' 00:31:24.197 [2024-11-19 11:42:37.870925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.197 [2024-11-19 11:42:37.870939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.197 [2024-11-19 11:42:37.882921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.197 [2024-11-19 11:42:37.882933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.197 [2024-11-19 11:42:37.894922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.197 [2024-11-19 11:42:37.894932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.197 [2024-11-19 11:42:37.897698] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:31:24.197 [2024-11-19 11:42:37.897742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480027 ] 00:31:24.197 [2024-11-19 11:42:37.906922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.197 [2024-11-19 11:42:37.906933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.197 [2024-11-19 11:42:37.918920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.197 [2024-11-19 11:42:37.918930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.197 [2024-11-19 11:42:37.930923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.197 [2024-11-19 11:42:37.930933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.197 [2024-11-19 11:42:37.942922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.197 [2024-11-19 11:42:37.942932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.197 [2024-11-19 11:42:37.954922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.197 [2024-11-19 11:42:37.954932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.197 [2024-11-19 11:42:37.966930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.197 [2024-11-19 11:42:37.966945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.197 [2024-11-19 11:42:37.971877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.456 [2024-11-19 11:42:37.978923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:24.456 [2024-11-19 11:42:37.978934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:37.990921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:37.990935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.002922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.002932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.013868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.456 [2024-11-19 11:42:38.014923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.014935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.026932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.026956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.038928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.038951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.050924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.050939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.062921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.062932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.074925] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.074937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.086921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.086931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.098932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.098959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.110945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.110963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.122929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.122943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.134926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.134940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.146929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.146944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.158928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.158954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 Running I/O for 5 seconds... 
00:31:24.456 [2024-11-19 11:42:38.175850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.175871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.191059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.191078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.202007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.202026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.217396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.217415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.456 [2024-11-19 11:42:38.232173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.456 [2024-11-19 11:42:38.232192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.247870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.247891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.263278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.263296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.278972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.278992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.291759] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.291777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.306921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.306940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.318313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.318331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.332622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.332640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.347934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.347959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.362684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.362703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.374365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.374384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.388707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.388725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.403579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.403597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.419325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.419343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.435187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.435206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.447449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.447467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.460740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.460758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.475542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.475561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.715 [2024-11-19 11:42:38.491360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.715 [2024-11-19 11:42:38.491379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.506874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.506892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.520888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 
[2024-11-19 11:42:38.520907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.536191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.536209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.551243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.551261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.563813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.563832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.576887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.576905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.592465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.592484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.607879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.607899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.622807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.622827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.635131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.635151] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.649434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.649452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.664367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.664386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.679006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.679025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.690842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.690862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.705059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.705078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.720294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.720313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.735487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.735505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.974 [2024-11-19 11:42:38.748596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.974 [2024-11-19 11:42:38.748614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:25.233 [2024-11-19 11:42:38.764383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.233 [2024-11-19 11:42:38.764403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.233 [2024-11-19 11:42:38.779620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.233 [2024-11-19 11:42:38.779639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.233 [2024-11-19 11:42:38.794390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.233 [2024-11-19 11:42:38.794410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.233 [2024-11-19 11:42:38.807227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.233 [2024-11-19 11:42:38.807247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.820323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.820342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.835847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.835866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.850566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.850585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.863703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.863722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.879130] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.879149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.890161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.890180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.904605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.904623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.919445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.919463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.934997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.935016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.946497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.946523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.960788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.960809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.976063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.976084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:38.991115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:38.991135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.234 [2024-11-19 11:42:39.002217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.234 [2024-11-19 11:42:39.002236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.016547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.016568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.031999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.032019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.043081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.043101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.056980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.057000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.072338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.072356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.087551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.087570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.098829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 
[2024-11-19 11:42:39.098848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.112685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.112705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.128067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.128087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.143077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.143097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.153516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.153536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.168698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.168717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 16250.00 IOPS, 126.95 MiB/s [2024-11-19T10:42:39.274Z] [2024-11-19 11:42:39.183462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.183480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.198453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.198473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.210092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 
[2024-11-19 11:42:39.210112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.225032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.225050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.240250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.240270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.493 [2024-11-19 11:42:39.255578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.493 [2024-11-19 11:42:39.255597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.751 [2024-11-19 11:42:39.271365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.751 [2024-11-19 11:42:39.271384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.751 [2024-11-19 11:42:39.283905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.751 [2024-11-19 11:42:39.283923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.751 [2024-11-19 11:42:39.299501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.751 [2024-11-19 11:42:39.299520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.751 [2024-11-19 11:42:39.315410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.751 [2024-11-19 11:42:39.315429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.751 [2024-11-19 11:42:39.327884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.751 [2024-11-19 11:42:39.327902] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:25.751 [2024-11-19 11:42:39.339311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:25.751 [2024-11-19 11:42:39.339334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message pair (subsystem.c:2123 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517 "Unable to add namespace") repeats continuously from 11:42:39.352673 through 11:42:41.551929; repeated entries omitted. Two performance samples were interleaved with the errors: ...]
00:31:26.530 16225.50 IOPS, 126.76 MiB/s [2024-11-19T10:42:40.311Z]
00:31:27.695 16250.00 IOPS, 126.95 MiB/s [2024-11-19T10:42:41.476Z]
00:31:27.956 [2024-11-19 11:42:41.566952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:27.956 [2024-11-19 11:42:41.566971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.956 [2024-11-19 11:42:41.580008]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.956 [2024-11-19 11:42:41.580027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.956 [2024-11-19 11:42:41.591223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.956 [2024-11-19 11:42:41.591241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.956 [2024-11-19 11:42:41.604803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.956 [2024-11-19 11:42:41.604823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.956 [2024-11-19 11:42:41.620019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.956 [2024-11-19 11:42:41.620038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.956 [2024-11-19 11:42:41.635098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.956 [2024-11-19 11:42:41.635119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.956 [2024-11-19 11:42:41.647957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.956 [2024-11-19 11:42:41.647976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.956 [2024-11-19 11:42:41.659614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.956 [2024-11-19 11:42:41.659633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.956 [2024-11-19 11:42:41.672640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.956 [2024-11-19 11:42:41.672660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.956 [2024-11-19 11:42:41.687512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:27.956 [2024-11-19 11:42:41.687532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.956 [2024-11-19 11:42:41.703162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.956 [2024-11-19 11:42:41.703183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.956 [2024-11-19 11:42:41.714370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.956 [2024-11-19 11:42:41.714391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.956 [2024-11-19 11:42:41.728734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:27.956 [2024-11-19 11:42:41.728754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.216 [2024-11-19 11:42:41.743750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.216 [2024-11-19 11:42:41.743769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.216 [2024-11-19 11:42:41.759112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.216 [2024-11-19 11:42:41.759132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.216 [2024-11-19 11:42:41.771953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.216 [2024-11-19 11:42:41.771972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.216 [2024-11-19 11:42:41.787177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.216 [2024-11-19 11:42:41.787198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.216 [2024-11-19 11:42:41.798516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.216 
[2024-11-19 11:42:41.798535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.216 [2024-11-19 11:42:41.813335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.216 [2024-11-19 11:42:41.813355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.216 [2024-11-19 11:42:41.828644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.217 [2024-11-19 11:42:41.828664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.217 [2024-11-19 11:42:41.843847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.217 [2024-11-19 11:42:41.843866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.217 [2024-11-19 11:42:41.854873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.217 [2024-11-19 11:42:41.854893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.217 [2024-11-19 11:42:41.868777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.217 [2024-11-19 11:42:41.868797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.217 [2024-11-19 11:42:41.884180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.217 [2024-11-19 11:42:41.884201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.217 [2024-11-19 11:42:41.899286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.217 [2024-11-19 11:42:41.899305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.217 [2024-11-19 11:42:41.914622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.217 [2024-11-19 11:42:41.914642] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.217 [2024-11-19 11:42:41.928878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.217 [2024-11-19 11:42:41.928899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.217 [2024-11-19 11:42:41.944480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.217 [2024-11-19 11:42:41.944500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.217 [2024-11-19 11:42:41.959508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.217 [2024-11-19 11:42:41.959527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.217 [2024-11-19 11:42:41.974719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.217 [2024-11-19 11:42:41.974739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.217 [2024-11-19 11:42:41.987782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.217 [2024-11-19 11:42:41.987802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.476 [2024-11-19 11:42:42.002979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.476 [2024-11-19 11:42:42.002999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.476 [2024-11-19 11:42:42.015911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.476 [2024-11-19 11:42:42.015931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.476 [2024-11-19 11:42:42.031246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.476 [2024-11-19 11:42:42.031266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:28.476 [2024-11-19 11:42:42.046840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.476 [2024-11-19 11:42:42.046861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.061036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.061056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.076312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.076332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.091167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.091198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.103785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.103804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.119095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.119114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.132302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.132320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.143518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.143537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.156839] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.156858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.172445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.172464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 16259.00 IOPS, 127.02 MiB/s [2024-11-19T10:42:42.258Z] [2024-11-19 11:42:42.187650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.187669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.203069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.203089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.214760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.214779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.229348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.229367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.477 [2024-11-19 11:42:42.244933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.477 [2024-11-19 11:42:42.244958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.260207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.260226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.275139] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.275159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.286165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.286196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.300867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.300887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.315920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.315953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.330766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.330786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.345606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.345625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.360294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.360313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.375391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.375409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.391936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.391962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.407578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.407597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.419209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.419227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.432875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.432894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.447934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.447958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.462978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.462997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.476732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.476750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.491697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 [2024-11-19 11:42:42.491716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.737 [2024-11-19 11:42:42.506934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.737 
[2024-11-19 11:42:42.506959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.997 [2024-11-19 11:42:42.520943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.997 [2024-11-19 11:42:42.520968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.997 [2024-11-19 11:42:42.538163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.997 [2024-11-19 11:42:42.538183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.997 [2024-11-19 11:42:42.552166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.997 [2024-11-19 11:42:42.552185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.997 [2024-11-19 11:42:42.563381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.997 [2024-11-19 11:42:42.563399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.997 [2024-11-19 11:42:42.576863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.997 [2024-11-19 11:42:42.576883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.997 [2024-11-19 11:42:42.592589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.997 [2024-11-19 11:42:42.592613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.997 [2024-11-19 11:42:42.607258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.997 [2024-11-19 11:42:42.607277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.997 [2024-11-19 11:42:42.619037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.998 [2024-11-19 11:42:42.619057] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.998 [2024-11-19 11:42:42.632859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.998 [2024-11-19 11:42:42.632879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.998 [2024-11-19 11:42:42.648518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.998 [2024-11-19 11:42:42.648537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.998 [2024-11-19 11:42:42.663787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.998 [2024-11-19 11:42:42.663806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.998 [2024-11-19 11:42:42.678598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.998 [2024-11-19 11:42:42.678617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.998 [2024-11-19 11:42:42.693157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.998 [2024-11-19 11:42:42.693176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.998 [2024-11-19 11:42:42.708314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.998 [2024-11-19 11:42:42.708333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.998 [2024-11-19 11:42:42.723337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.998 [2024-11-19 11:42:42.723357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.998 [2024-11-19 11:42:42.739163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.998 [2024-11-19 11:42:42.739184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:28.998 [2024-11-19 11:42:42.749988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.998 [2024-11-19 11:42:42.750007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.998 [2024-11-19 11:42:42.765153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:28.998 [2024-11-19 11:42:42.765173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.258 [2024-11-19 11:42:42.780402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.258 [2024-11-19 11:42:42.780420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.258 [2024-11-19 11:42:42.795280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.258 [2024-11-19 11:42:42.795299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.258 [2024-11-19 11:42:42.811345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.258 [2024-11-19 11:42:42.811364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.258 [2024-11-19 11:42:42.826953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.258 [2024-11-19 11:42:42.826972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.258 [2024-11-19 11:42:42.840700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.258 [2024-11-19 11:42:42.840719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.258 [2024-11-19 11:42:42.855945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.258 [2024-11-19 11:42:42.855971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.258 [2024-11-19 11:42:42.870573] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.258 [2024-11-19 11:42:42.870597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.258 [2024-11-19 11:42:42.884439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.258 [2024-11-19 11:42:42.884458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.258 [2024-11-19 11:42:42.899323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.258 [2024-11-19 11:42:42.899341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.258 [2024-11-19 11:42:42.912580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.258 [2024-11-19 11:42:42.912599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.258 [2024-11-19 11:42:42.927722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.258 [2024-11-19 11:42:42.927742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.258 [2024-11-19 11:42:42.943395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.258 [2024-11-19 11:42:42.943413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.259 [2024-11-19 11:42:42.959073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.259 [2024-11-19 11:42:42.959092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.259 [2024-11-19 11:42:42.972597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.259 [2024-11-19 11:42:42.972615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.259 [2024-11-19 11:42:42.987967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:29.259 [2024-11-19 11:42:42.987986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.259 [2024-11-19 11:42:43.002734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.259 [2024-11-19 11:42:43.002754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.259 [2024-11-19 11:42:43.014332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.259 [2024-11-19 11:42:43.014351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.259 [2024-11-19 11:42:43.029006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.259 [2024-11-19 11:42:43.029025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.519 [2024-11-19 11:42:43.044050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.519 [2024-11-19 11:42:43.044070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.519 [2024-11-19 11:42:43.058995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.519 [2024-11-19 11:42:43.059015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.519 [2024-11-19 11:42:43.071981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.519 [2024-11-19 11:42:43.072000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.519 [2024-11-19 11:42:43.087487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.519 [2024-11-19 11:42:43.087505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:29.519 [2024-11-19 11:42:43.103660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:29.519 
[2024-11-19 11:42:43.103679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:29.519 16278.60 IOPS, 127.18 MiB/s [2024-11-19T10:42:43.300Z]
00:31:29.519 Latency(us)
00:31:29.519 [2024-11-19T10:42:43.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:29.519 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:29.519 Nvme1n1 : 5.01 16280.95 127.19 0.00 0.00 7854.18 2080.06 14588.88
00:31:29.519 [2024-11-19T10:42:43.300Z] ===================================================================================================================
00:31:29.519 [2024-11-19T10:42:43.300Z] Total : 16280.95 127.19 0.00 0.00 7854.18 2080.06 14588.88
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:29.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2480027) - No such process
00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2480027
00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.780 11:42:43
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:29.780 delay0 00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.780 11:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:29.780 [2024-11-19 11:42:43.445741] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:37.901 Initializing NVMe Controllers 00:31:37.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.901 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:37.901 Initialization complete. Launching workers. 00:31:37.901 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 292, failed: 15577 00:31:37.901 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15796, failed to submit 73 00:31:37.901 success 15685, unsuccessful 111, failed 0 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.901 rmmod nvme_tcp 00:31:37.901 rmmod nvme_fabrics 00:31:37.901 rmmod nvme_keyring 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2478186 ']' 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 
2478186 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2478186 ']' 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2478186 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478186 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478186' 00:31:37.901 killing process with pid 2478186 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2478186 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2478186 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:37.901 11:42:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.901 11:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.282 11:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:39.282 00:31:39.282 real 0m32.237s 00:31:39.282 user 0m41.692s 00:31:39.282 sys 0m12.868s 00:31:39.282 11:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:39.282 11:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:39.282 ************************************ 00:31:39.282 END TEST nvmf_zcopy 00:31:39.282 ************************************ 00:31:39.282 11:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:39.282 11:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:39.282 11:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:39.282 11:42:52 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:39.282 ************************************ 00:31:39.282 START TEST nvmf_nmic 00:31:39.282 ************************************ 00:31:39.282 11:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:39.282 * Looking for test storage... 00:31:39.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:39.282 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:39.282 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:39.282 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@338 -- # local 'op=<' 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@366 -- # ver2[v]=2 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:39.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.543 --rc genhtml_branch_coverage=1 00:31:39.543 --rc genhtml_function_coverage=1 00:31:39.543 --rc genhtml_legend=1 00:31:39.543 --rc geninfo_all_blocks=1 00:31:39.543 --rc geninfo_unexecuted_blocks=1 00:31:39.543 00:31:39.543 ' 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:39.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.543 --rc genhtml_branch_coverage=1 00:31:39.543 --rc genhtml_function_coverage=1 00:31:39.543 --rc genhtml_legend=1 00:31:39.543 --rc geninfo_all_blocks=1 00:31:39.543 --rc geninfo_unexecuted_blocks=1 00:31:39.543 00:31:39.543 ' 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:39.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.543 --rc genhtml_branch_coverage=1 00:31:39.543 --rc genhtml_function_coverage=1 00:31:39.543 --rc genhtml_legend=1 00:31:39.543 --rc geninfo_all_blocks=1 00:31:39.543 --rc geninfo_unexecuted_blocks=1 00:31:39.543 00:31:39.543 ' 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:39.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.543 --rc genhtml_branch_coverage=1 00:31:39.543 --rc genhtml_function_coverage=1 00:31:39.543 --rc genhtml_legend=1 00:31:39.543 --rc geninfo_all_blocks=1 00:31:39.543 --rc geninfo_unexecuted_blocks=1 00:31:39.543 00:31:39.543 ' 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:39.543 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.544 11:42:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:39.544 11:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.118 11:42:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:46.118 11:42:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:46.118 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:46.118 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.118 11:42:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:46.118 Found net devices under 0000:86:00.0: cvl_0_0 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.118 11:42:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:46.118 Found net devices under 0000:86:00.1: cvl_0_1 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:46.118 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.119 11:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:46.119 11:42:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:46.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:46.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:31:46.119 00:31:46.119 --- 10.0.0.2 ping statistics --- 00:31:46.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.119 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:46.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:31:46.119 00:31:46.119 --- 10.0.0.1 ping statistics --- 00:31:46.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.119 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2485391 
00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2485391 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2485391 ']' 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.119 [2024-11-19 11:42:59.137798] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:46.119 [2024-11-19 11:42:59.138735] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:31:46.119 [2024-11-19 11:42:59.138768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.119 [2024-11-19 11:42:59.219064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:46.119 [2024-11-19 11:42:59.263179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.119 [2024-11-19 11:42:59.263219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.119 [2024-11-19 11:42:59.263229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.119 [2024-11-19 11:42:59.263237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.119 [2024-11-19 11:42:59.263244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.119 [2024-11-19 11:42:59.264858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.119 [2024-11-19 11:42:59.264982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.119 [2024-11-19 11:42:59.265035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.119 [2024-11-19 11:42:59.265036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:46.119 [2024-11-19 11:42:59.332912] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:46.119 [2024-11-19 11:42:59.333337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:46.119 [2024-11-19 11:42:59.333860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:46.119 [2024-11-19 11:42:59.334165] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:46.119 [2024-11-19 11:42:59.334220] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.119 [2024-11-19 11:42:59.405869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.119 Malloc0 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.119 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.120 [2024-11-19 11:42:59.494144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.120 11:42:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:46.120 test case1: single bdev can't be used in multiple subsystems 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.120 [2024-11-19 11:42:59.525577] 
bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:46.120 [2024-11-19 11:42:59.525601] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:46.120 [2024-11-19 11:42:59.525612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.120 request: 00:31:46.120 { 00:31:46.120 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:46.120 "namespace": { 00:31:46.120 "bdev_name": "Malloc0", 00:31:46.120 "no_auto_visible": false 00:31:46.120 }, 00:31:46.120 "method": "nvmf_subsystem_add_ns", 00:31:46.120 "req_id": 1 00:31:46.120 } 00:31:46.120 Got JSON-RPC error response 00:31:46.120 response: 00:31:46.120 { 00:31:46.120 "code": -32602, 00:31:46.120 "message": "Invalid parameters" 00:31:46.120 } 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:46.120 Adding namespace failed - expected result. 
00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:46.120 test case2: host connect to nvmf target in multiple paths 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.120 [2024-11-19 11:42:59.537691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:46.120 11:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:46.379 11:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:46.379 11:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:46.379 11:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:46.379 11:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:46.379 11:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:48.284 11:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:48.284 11:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:48.284 11:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:48.558 11:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:48.558 11:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:48.558 11:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:48.558 11:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:48.558 [global] 00:31:48.558 thread=1 00:31:48.558 invalidate=1 00:31:48.558 rw=write 00:31:48.558 time_based=1 00:31:48.558 runtime=1 00:31:48.558 ioengine=libaio 00:31:48.558 direct=1 00:31:48.558 bs=4096 00:31:48.558 iodepth=1 00:31:48.558 norandommap=0 00:31:48.558 numjobs=1 00:31:48.558 00:31:48.558 verify_dump=1 00:31:48.558 verify_backlog=512 00:31:48.558 verify_state_save=0 00:31:48.558 do_verify=1 00:31:48.558 verify=crc32c-intel 00:31:48.558 [job0] 00:31:48.558 filename=/dev/nvme0n1 00:31:48.558 Could not set queue depth (nvme0n1) 00:31:48.815 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:48.815 fio-3.35 00:31:48.815 Starting 1 thread 00:31:49.746 00:31:49.746 job0: (groupid=0, jobs=1): err= 0: pid=2486106: Tue Nov 19 
11:43:03 2024 00:31:49.746 read: IOPS=2132, BW=8531KiB/s (8736kB/s)(8540KiB/1001msec) 00:31:49.746 slat (nsec): min=8066, max=45297, avg=9074.68, stdev=1797.94 00:31:49.746 clat (usec): min=196, max=40783, avg=256.10, stdev=877.64 00:31:49.746 lat (usec): min=205, max=40794, avg=265.17, stdev=877.68 00:31:49.746 clat percentiles (usec): 00:31:49.746 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:31:49.746 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:31:49.746 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 253], 00:31:49.746 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 363], 99.95th=[ 383], 00:31:49.746 | 99.99th=[40633] 00:31:49.746 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:49.746 slat (nsec): min=11207, max=47092, avg=12438.41, stdev=1809.50 00:31:49.746 clat (usec): min=121, max=322, avg=151.28, stdev=18.41 00:31:49.746 lat (usec): min=142, max=334, avg=163.71, stdev=18.64 00:31:49.746 clat percentiles (usec): 00:31:49.746 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:31:49.746 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 145], 60.00th=[ 147], 00:31:49.746 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 188], 95.00th=[ 192], 00:31:49.746 | 99.00th=[ 202], 99.50th=[ 241], 99.90th=[ 260], 99.95th=[ 269], 00:31:49.746 | 99.99th=[ 322] 00:31:49.746 bw ( KiB/s): min= 8944, max= 8944, per=87.43%, avg=8944.00, stdev= 0.00, samples=1 00:31:49.746 iops : min= 2236, max= 2236, avg=2236.00, stdev= 0.00, samples=1 00:31:49.746 lat (usec) : 250=95.06%, 500=4.92% 00:31:49.746 lat (msec) : 50=0.02% 00:31:49.746 cpu : usr=4.00%, sys=8.10%, ctx=4695, majf=0, minf=1 00:31:49.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.746 issued rwts: total=2135,2560,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:31:49.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:49.746 00:31:49.746 Run status group 0 (all jobs): 00:31:49.746 READ: bw=8531KiB/s (8736kB/s), 8531KiB/s-8531KiB/s (8736kB/s-8736kB/s), io=8540KiB (8745kB), run=1001-1001msec 00:31:49.746 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:31:49.746 00:31:49.746 Disk stats (read/write): 00:31:49.746 nvme0n1: ios=2098/2056, merge=0/0, ticks=533/297, in_queue=830, util=91.38% 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:50.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:50.004 11:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.004 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:50.004 rmmod nvme_tcp 00:31:50.004 rmmod nvme_fabrics 00:31:50.004 rmmod nvme_keyring 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2485391 ']' 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2485391 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2485391 ']' 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2485391 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2485391 
00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2485391' 00:31:50.262 killing process with pid 2485391 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2485391 00:31:50.262 11:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2485391 00:31:50.262 11:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:50.262 11:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:50.262 11:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:50.262 11:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:50.262 11:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:50.262 11:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:50.262 11:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:50.521 11:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:50.521 11:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:50.521 11:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.521 11:43:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.521 11:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.427 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:52.427 00:31:52.427 real 0m13.144s 00:31:52.427 user 0m24.032s 00:31:52.427 sys 0m6.122s 00:31:52.427 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.427 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:52.427 ************************************ 00:31:52.427 END TEST nvmf_nmic 00:31:52.427 ************************************ 00:31:52.427 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:52.427 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:52.427 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.427 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:52.427 ************************************ 00:31:52.427 START TEST nvmf_fio_target 00:31:52.427 ************************************ 00:31:52.427 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:52.687 * Looking for test storage... 
00:31:52.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.687 
11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:52.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.687 --rc genhtml_branch_coverage=1 00:31:52.687 --rc genhtml_function_coverage=1 00:31:52.687 --rc genhtml_legend=1 00:31:52.687 --rc geninfo_all_blocks=1 00:31:52.687 --rc geninfo_unexecuted_blocks=1 00:31:52.687 00:31:52.687 ' 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:52.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.687 --rc genhtml_branch_coverage=1 00:31:52.687 --rc genhtml_function_coverage=1 00:31:52.687 --rc genhtml_legend=1 00:31:52.687 --rc geninfo_all_blocks=1 00:31:52.687 --rc geninfo_unexecuted_blocks=1 00:31:52.687 00:31:52.687 ' 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:52.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.687 --rc genhtml_branch_coverage=1 00:31:52.687 --rc genhtml_function_coverage=1 00:31:52.687 --rc genhtml_legend=1 00:31:52.687 --rc geninfo_all_blocks=1 00:31:52.687 --rc geninfo_unexecuted_blocks=1 00:31:52.687 00:31:52.687 ' 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:52.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.687 --rc genhtml_branch_coverage=1 00:31:52.687 --rc genhtml_function_coverage=1 00:31:52.687 --rc genhtml_legend=1 00:31:52.687 --rc geninfo_all_blocks=1 
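The trace above shows scripts/common.sh splitting two dotted version strings on `.`/`-` and comparing them component-wise (`lt 1.15 2` succeeds, so the older lcov option set is chosen). A minimal bash sketch of that comparison, assuming the same split-and-compare semantics as the trace; `lt_sketch` is a hypothetical re-implementation, not the exact SPDK source:

```shell
# lt_sketch VER1 VER2 -> exit 0 iff VER1 < VER2, comparing numeric
# components left to right, with missing components treated as 0.
lt_sketch() {
  local IFS=.                 # split the dotted strings on '.'
  local -a v1=($1) v2=($2)
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}
    if (( a < b )); then return 0; fi
    if (( a > b )); then return 1; fi
  done
  return 1                    # equal versions are not less-than
}

if lt_sketch 1.15 2; then echo "1.15 < 2"; fi   # mirrors the lt 1.15 2 trace
```

Note that the component-wise loop is what makes `1.2.3 < 1.10` come out true, where a plain string comparison would get it wrong.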
00:31:52.687 --rc geninfo_unexecuted_blocks=1 00:31:52.687 00:31:52.687 ' 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:52.687 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:52.687 
11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.688 11:43:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:52.688 
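The `paths/export.sh` trace above prepends the Go, golangci, and protoc bin directories unconditionally on every source, so the exported PATH accumulates many duplicate entries. A hedged sketch of a prepend-if-absent guard that would avoid that growth; `path_prepend` and the `demo_path` variable are hypothetical (the demo operates on its own variable rather than the live PATH):

```shell
# path_prepend DIR: prepend DIR to demo_path only if it is not already
# present; the surrounding ':' anchors prevent substring false matches.
path_prepend() {
  case ":$demo_path:" in
    *":$1:"*) ;;                       # already present: no change
    *) demo_path="$1:$demo_path" ;;
  esac
}

demo_path=/usr/bin:/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin        # second call is a no-op
echo "$demo_path"                      # /opt/go/1.21.1/bin:/usr/bin:/bin
```

Duplicate PATH entries are harmless for lookup (the first hit wins) but make traces like the one above hard to read, which is the only cost visible in this log.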
11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:52.688 11:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:52.688 11:43:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.259 11:43:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:59.259 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:59.259 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.259 
11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:59.259 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:59.259 Found net devices under 0000:86:00.1: cvl_0_1 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.259 11:43:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.259 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:31:59.260 00:31:59.260 --- 10.0.0.2 ping statistics --- 00:31:59.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.260 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:59.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:31:59.260 00:31:59.260 --- 10.0.0.1 ping statistics --- 00:31:59.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.260 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.260 11:43:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2489763 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2489763 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2489763 ']' 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.260 [2024-11-19 11:43:12.400874] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:59.260 [2024-11-19 11:43:12.401810] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:31:59.260 [2024-11-19 11:43:12.401844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.260 [2024-11-19 11:43:12.482353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:59.260 [2024-11-19 11:43:12.527147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.260 [2024-11-19 11:43:12.527181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.260 [2024-11-19 11:43:12.527191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.260 [2024-11-19 11:43:12.527199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.260 [2024-11-19 11:43:12.527205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.260 [2024-11-19 11:43:12.528907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.260 [2024-11-19 11:43:12.529019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.260 [2024-11-19 11:43:12.529058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.260 [2024-11-19 11:43:12.529059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.260 [2024-11-19 11:43:12.596629] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:59.260 [2024-11-19 11:43:12.597108] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:59.260 [2024-11-19 11:43:12.597564] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:59.260 [2024-11-19 11:43:12.597856] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:59.260 [2024-11-19 11:43:12.597900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:59.260 [2024-11-19 11:43:12.837899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.260 11:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:59.520 11:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:59.520 11:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:59.780 11:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:59.780 11:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:59.780 11:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:59.780 11:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.039 11:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:00.039 11:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:00.298 11:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.557 11:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:00.557 11:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.817 11:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:00.817 11:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.817 11:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:32:00.817 11:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:01.076 11:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:01.335 11:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:01.335 11:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:01.593 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:01.593 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:01.593 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:01.851 [2024-11-19 11:43:15.541830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.851 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:02.109 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:02.368 11:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:02.626 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:02.626 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:02.626 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:02.626 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:02.626 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:02.626 11:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:04.529 11:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:04.529 11:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:04.529 11:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:04.529 11:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:04.529 11:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:04.529 11:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:32:04.529 11:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:04.529 [global] 00:32:04.529 thread=1 00:32:04.529 invalidate=1 00:32:04.529 rw=write 00:32:04.529 time_based=1 00:32:04.529 runtime=1 00:32:04.529 ioengine=libaio 00:32:04.529 direct=1 00:32:04.529 bs=4096 00:32:04.529 iodepth=1 00:32:04.529 norandommap=0 00:32:04.529 numjobs=1 00:32:04.529 00:32:04.529 verify_dump=1 00:32:04.529 verify_backlog=512 00:32:04.529 verify_state_save=0 00:32:04.529 do_verify=1 00:32:04.529 verify=crc32c-intel 00:32:04.529 [job0] 00:32:04.529 filename=/dev/nvme0n1 00:32:04.529 [job1] 00:32:04.529 filename=/dev/nvme0n2 00:32:04.529 [job2] 00:32:04.529 filename=/dev/nvme0n3 00:32:04.529 [job3] 00:32:04.529 filename=/dev/nvme0n4 00:32:04.788 Could not set queue depth (nvme0n1) 00:32:04.788 Could not set queue depth (nvme0n2) 00:32:04.788 Could not set queue depth (nvme0n3) 00:32:04.788 Could not set queue depth (nvme0n4) 00:32:05.046 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:05.046 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:05.046 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:05.046 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:05.046 fio-3.35 00:32:05.046 Starting 4 threads 00:32:06.419 00:32:06.419 job0: (groupid=0, jobs=1): err= 0: pid=2491021: Tue Nov 19 11:43:19 2024 00:32:06.419 read: IOPS=1008, BW=4035KiB/s (4132kB/s)(4144KiB/1027msec) 00:32:06.419 slat (nsec): min=7235, max=37298, avg=8397.81, stdev=2028.34 00:32:06.419 clat (usec): min=183, max=41012, avg=680.82, stdev=4174.92 00:32:06.419 lat (usec): min=191, 
max=41036, avg=689.22, stdev=4176.41 00:32:06.419 clat percentiles (usec): 00:32:06.419 | 1.00th=[ 190], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 231], 00:32:06.419 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:32:06.419 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 297], 00:32:06.419 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:06.419 | 99.99th=[41157] 00:32:06.419 write: IOPS=1495, BW=5982KiB/s (6126kB/s)(6144KiB/1027msec); 0 zone resets 00:32:06.419 slat (nsec): min=10649, max=62450, avg=12174.91, stdev=2380.45 00:32:06.419 clat (usec): min=136, max=330, avg=186.39, stdev=31.89 00:32:06.419 lat (usec): min=147, max=374, avg=198.56, stdev=32.17 00:32:06.419 clat percentiles (usec): 00:32:06.419 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:32:06.419 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 186], 00:32:06.419 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 229], 95.00th=[ 269], 00:32:06.419 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 314], 99.95th=[ 330], 00:32:06.419 | 99.99th=[ 330] 00:32:06.419 bw ( KiB/s): min= 3240, max= 9048, per=31.52%, avg=6144.00, stdev=4106.88, samples=2 00:32:06.419 iops : min= 810, max= 2262, avg=1536.00, stdev=1026.72, samples=2 00:32:06.419 lat (usec) : 250=80.40%, 500=19.17% 00:32:06.419 lat (msec) : 50=0.43% 00:32:06.419 cpu : usr=2.24%, sys=4.00%, ctx=2573, majf=0, minf=1 00:32:06.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.419 issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.419 job1: (groupid=0, jobs=1): err= 0: pid=2491036: Tue Nov 19 11:43:19 2024 00:32:06.419 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 
00:32:06.419 slat (nsec): min=6586, max=33644, avg=8674.74, stdev=2216.35 00:32:06.419 clat (usec): min=180, max=41390, avg=679.09, stdev=4206.16 00:32:06.419 lat (usec): min=187, max=41398, avg=687.77, stdev=4206.22 00:32:06.419 clat percentiles (usec): 00:32:06.419 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 219], 00:32:06.419 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:32:06.419 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 289], 00:32:06.419 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:06.419 | 99.99th=[41157] 00:32:06.419 write: IOPS=1419, BW=5678KiB/s (5815kB/s)(5684KiB/1001msec); 0 zone resets 00:32:06.419 slat (usec): min=10, max=4641, avg=15.36, stdev=122.81 00:32:06.419 clat (usec): min=131, max=367, avg=187.55, stdev=36.18 00:32:06.419 lat (usec): min=143, max=4874, avg=202.91, stdev=129.26 00:32:06.419 clat percentiles (usec): 00:32:06.419 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 153], 00:32:06.419 | 30.00th=[ 159], 40.00th=[ 174], 50.00th=[ 188], 60.00th=[ 194], 00:32:06.419 | 70.00th=[ 198], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 245], 00:32:06.419 | 99.00th=[ 262], 99.50th=[ 293], 99.90th=[ 359], 99.95th=[ 367], 00:32:06.419 | 99.99th=[ 367] 00:32:06.419 bw ( KiB/s): min= 8192, max= 8192, per=42.02%, avg=8192.00, stdev= 0.00, samples=1 00:32:06.419 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:06.419 lat (usec) : 250=85.03%, 500=14.48%, 750=0.04% 00:32:06.419 lat (msec) : 50=0.45% 00:32:06.419 cpu : usr=2.40%, sys=3.60%, ctx=2447, majf=0, minf=2 00:32:06.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.419 issued rwts: total=1024,1421,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.419 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:32:06.419 job2: (groupid=0, jobs=1): err= 0: pid=2491055: Tue Nov 19 11:43:19 2024 00:32:06.419 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:32:06.419 slat (nsec): min=11028, max=25245, avg=24011.91, stdev=2926.93 00:32:06.419 clat (usec): min=40905, max=41977, avg=41069.07, stdev=286.59 00:32:06.419 lat (usec): min=40929, max=42002, avg=41093.09, stdev=286.47 00:32:06.419 clat percentiles (usec): 00:32:06.419 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:06.419 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:06.419 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:06.419 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:06.419 | 99.99th=[42206] 00:32:06.419 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:32:06.419 slat (usec): min=10, max=11866, avg=35.88, stdev=523.87 00:32:06.419 clat (usec): min=150, max=282, avg=179.04, stdev=17.72 00:32:06.419 lat (usec): min=161, max=12057, avg=214.92, stdev=524.72 00:32:06.419 clat percentiles (usec): 00:32:06.419 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:32:06.419 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 180], 00:32:06.419 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 212], 00:32:06.419 | 99.00th=[ 237], 99.50th=[ 247], 99.90th=[ 281], 99.95th=[ 281], 00:32:06.419 | 99.99th=[ 281] 00:32:06.419 bw ( KiB/s): min= 4096, max= 4096, per=21.01%, avg=4096.00, stdev= 0.00, samples=1 00:32:06.419 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:06.419 lat (usec) : 250=95.69%, 500=0.19% 00:32:06.419 lat (msec) : 50=4.12% 00:32:06.419 cpu : usr=0.49%, sys=0.89%, ctx=536, majf=0, minf=1 00:32:06.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.419 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.419 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.419 job3: (groupid=0, jobs=1): err= 0: pid=2491061: Tue Nov 19 11:43:19 2024 00:32:06.419 read: IOPS=1252, BW=5012KiB/s (5132kB/s)(5132KiB/1024msec) 00:32:06.419 slat (nsec): min=6458, max=30336, avg=7551.74, stdev=1750.24 00:32:06.419 clat (usec): min=190, max=41436, avg=567.08, stdev=3770.53 00:32:06.419 lat (usec): min=199, max=41443, avg=574.63, stdev=3770.93 00:32:06.419 clat percentiles (usec): 00:32:06.419 | 1.00th=[ 196], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 202], 00:32:06.419 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:32:06.419 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 258], 00:32:06.419 | 99.00th=[ 281], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:32:06.419 | 99.99th=[41681] 00:32:06.419 write: IOPS=1500, BW=6000KiB/s (6144kB/s)(6144KiB/1024msec); 0 zone resets 00:32:06.419 slat (nsec): min=9543, max=41525, avg=10640.89, stdev=1386.34 00:32:06.419 clat (usec): min=129, max=424, avg=171.95, stdev=35.50 00:32:06.419 lat (usec): min=139, max=465, avg=182.59, stdev=35.66 00:32:06.419 clat percentiles (usec): 00:32:06.419 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:32:06.419 | 30.00th=[ 145], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:32:06.419 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 241], 95.00th=[ 243], 00:32:06.419 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 371], 99.95th=[ 424], 00:32:06.419 | 99.99th=[ 424] 00:32:06.419 bw ( KiB/s): min= 4936, max= 7352, per=31.52%, avg=6144.00, stdev=1708.37, samples=2 00:32:06.419 iops : min= 1234, max= 1838, avg=1536.00, stdev=427.09, samples=2 00:32:06.419 lat (usec) : 250=96.28%, 500=3.33% 00:32:06.419 lat (msec) : 50=0.39% 00:32:06.419 cpu : usr=1.27%, sys=2.64%, ctx=2820, majf=0, minf=1 00:32:06.419 IO depths 
: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.419 issued rwts: total=1283,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.419 00:32:06.419 Run status group 0 (all jobs): 00:32:06.419 READ: bw=12.8MiB/s (13.4MB/s), 86.6KiB/s-5012KiB/s (88.7kB/s-5132kB/s), io=13.1MiB (13.8MB), run=1001-1027msec 00:32:06.419 WRITE: bw=19.0MiB/s (20.0MB/s), 2016KiB/s-6000KiB/s (2064kB/s-6144kB/s), io=19.6MiB (20.5MB), run=1001-1027msec 00:32:06.419 00:32:06.419 Disk stats (read/write): 00:32:06.419 nvme0n1: ios=1053/1536, merge=0/0, ticks=1351/267, in_queue=1618, util=85.07% 00:32:06.419 nvme0n2: ios=939/1024, merge=0/0, ticks=1528/185, in_queue=1713, util=89.21% 00:32:06.419 nvme0n3: ios=66/512, merge=0/0, ticks=921/89, in_queue=1010, util=94.23% 00:32:06.419 nvme0n4: ios=1116/1536, merge=0/0, ticks=1423/257, in_queue=1680, util=94.29% 00:32:06.419 11:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:06.419 [global] 00:32:06.419 thread=1 00:32:06.419 invalidate=1 00:32:06.419 rw=randwrite 00:32:06.419 time_based=1 00:32:06.419 runtime=1 00:32:06.419 ioengine=libaio 00:32:06.419 direct=1 00:32:06.419 bs=4096 00:32:06.419 iodepth=1 00:32:06.419 norandommap=0 00:32:06.419 numjobs=1 00:32:06.419 00:32:06.419 verify_dump=1 00:32:06.419 verify_backlog=512 00:32:06.419 verify_state_save=0 00:32:06.419 do_verify=1 00:32:06.420 verify=crc32c-intel 00:32:06.420 [job0] 00:32:06.420 filename=/dev/nvme0n1 00:32:06.420 [job1] 00:32:06.420 filename=/dev/nvme0n2 00:32:06.420 [job2] 00:32:06.420 filename=/dev/nvme0n3 00:32:06.420 [job3] 00:32:06.420 filename=/dev/nvme0n4 00:32:06.420 
Could not set queue depth (nvme0n1) 00:32:06.420 Could not set queue depth (nvme0n2) 00:32:06.420 Could not set queue depth (nvme0n3) 00:32:06.420 Could not set queue depth (nvme0n4) 00:32:06.420 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.420 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.420 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.420 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.420 fio-3.35 00:32:06.420 Starting 4 threads 00:32:07.794 00:32:07.794 job0: (groupid=0, jobs=1): err= 0: pid=2491466: Tue Nov 19 11:43:21 2024 00:32:07.794 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:32:07.794 slat (nsec): min=9329, max=23012, avg=21798.36, stdev=2805.11 00:32:07.794 clat (usec): min=40808, max=41971, avg=41072.53, stdev=293.36 00:32:07.794 lat (usec): min=40830, max=41993, avg=41094.33, stdev=292.73 00:32:07.794 clat percentiles (usec): 00:32:07.794 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:07.794 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:07.794 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:07.794 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:07.794 | 99.99th=[42206] 00:32:07.794 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:32:07.794 slat (nsec): min=9022, max=40241, avg=9889.35, stdev=1554.93 00:32:07.794 clat (usec): min=151, max=388, avg=178.06, stdev=16.40 00:32:07.794 lat (usec): min=160, max=428, avg=187.95, stdev=17.19 00:32:07.794 clat percentiles (usec): 00:32:07.794 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 167], 00:32:07.794 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 
178], 00:32:07.794 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 202], 00:32:07.794 | 99.00th=[ 221], 99.50th=[ 273], 99.90th=[ 388], 99.95th=[ 388], 00:32:07.794 | 99.99th=[ 388] 00:32:07.794 bw ( KiB/s): min= 4096, max= 4096, per=12.58%, avg=4096.00, stdev= 0.00, samples=1 00:32:07.794 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:07.794 lat (usec) : 250=95.32%, 500=0.56% 00:32:07.794 lat (msec) : 50=4.12% 00:32:07.794 cpu : usr=0.40%, sys=0.40%, ctx=535, majf=0, minf=2 00:32:07.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.794 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:07.794 job1: (groupid=0, jobs=1): err= 0: pid=2491467: Tue Nov 19 11:43:21 2024 00:32:07.795 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:07.795 slat (nsec): min=7200, max=24848, avg=8478.65, stdev=1317.75 00:32:07.795 clat (usec): min=195, max=694, avg=250.11, stdev=35.21 00:32:07.795 lat (usec): min=203, max=703, avg=258.59, stdev=35.25 00:32:07.795 clat percentiles (usec): 00:32:07.795 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 225], 00:32:07.795 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:32:07.795 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:32:07.795 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 537], 99.95th=[ 685], 00:32:07.795 | 99.99th=[ 693] 00:32:07.795 write: IOPS=2520, BW=9.84MiB/s (10.3MB/s)(9.86MiB/1001msec); 0 zone resets 00:32:07.795 slat (nsec): min=10468, max=35132, avg=12170.20, stdev=1756.89 00:32:07.795 clat (usec): min=139, max=910, avg=168.64, stdev=25.01 00:32:07.795 lat (usec): min=151, max=924, avg=180.81, stdev=25.19 00:32:07.795 clat percentiles 
(usec): 00:32:07.795 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:32:07.795 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:32:07.795 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:32:07.795 | 99.00th=[ 210], 99.50th=[ 265], 99.90th=[ 627], 99.95th=[ 635], 00:32:07.795 | 99.99th=[ 914] 00:32:07.795 bw ( KiB/s): min= 9536, max= 9536, per=29.29%, avg=9536.00, stdev= 0.00, samples=1 00:32:07.795 iops : min= 2384, max= 2384, avg=2384.00, stdev= 0.00, samples=1 00:32:07.795 lat (usec) : 250=84.03%, 500=15.82%, 750=0.13%, 1000=0.02% 00:32:07.795 cpu : usr=3.90%, sys=7.50%, ctx=4574, majf=0, minf=1 00:32:07.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.795 issued rwts: total=2048,2523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:07.795 job2: (groupid=0, jobs=1): err= 0: pid=2491468: Tue Nov 19 11:43:21 2024 00:32:07.795 read: IOPS=2050, BW=8204KiB/s (8401kB/s)(8212KiB/1001msec) 00:32:07.795 slat (nsec): min=7306, max=38499, avg=8805.36, stdev=1533.85 00:32:07.795 clat (usec): min=188, max=1401, avg=239.62, stdev=42.12 00:32:07.795 lat (usec): min=207, max=1409, avg=248.43, stdev=42.20 00:32:07.795 clat percentiles (usec): 00:32:07.795 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 00:32:07.795 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:32:07.795 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 260], 95.00th=[ 285], 00:32:07.795 | 99.00th=[ 437], 99.50th=[ 445], 99.90th=[ 498], 99.95th=[ 519], 00:32:07.795 | 99.99th=[ 1401] 00:32:07.795 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:07.795 slat (nsec): min=10058, max=62642, avg=11734.25, stdev=1959.67 00:32:07.795 
clat (usec): min=139, max=318, avg=174.12, stdev=13.68 00:32:07.795 lat (usec): min=152, max=381, avg=185.86, stdev=13.99 00:32:07.795 clat percentiles (usec): 00:32:07.795 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:32:07.795 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:32:07.795 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 200], 00:32:07.795 | 99.00th=[ 221], 99.50th=[ 225], 99.90th=[ 277], 99.95th=[ 281], 00:32:07.795 | 99.99th=[ 318] 00:32:07.795 bw ( KiB/s): min=10312, max=10312, per=31.68%, avg=10312.00, stdev= 0.00, samples=1 00:32:07.795 iops : min= 2578, max= 2578, avg=2578.00, stdev= 0.00, samples=1 00:32:07.795 lat (usec) : 250=92.65%, 500=7.31%, 750=0.02% 00:32:07.795 lat (msec) : 2=0.02% 00:32:07.795 cpu : usr=4.50%, sys=6.90%, ctx=4614, majf=0, minf=1 00:32:07.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.795 issued rwts: total=2053,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:07.795 job3: (groupid=0, jobs=1): err= 0: pid=2491469: Tue Nov 19 11:43:21 2024 00:32:07.795 read: IOPS=2088, BW=8356KiB/s (8556kB/s)(8364KiB/1001msec) 00:32:07.795 slat (nsec): min=7386, max=25482, avg=8478.83, stdev=1108.16 00:32:07.795 clat (usec): min=196, max=924, avg=236.54, stdev=27.97 00:32:07.795 lat (usec): min=204, max=932, avg=245.02, stdev=27.99 00:32:07.795 clat percentiles (usec): 00:32:07.795 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 217], 00:32:07.795 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:32:07.795 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 289], 00:32:07.795 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 404], 99.95th=[ 510], 00:32:07.795 | 99.99th=[ 922] 
00:32:07.795 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:07.795 slat (nsec): min=10614, max=37877, avg=11969.24, stdev=1512.94 00:32:07.795 clat (usec): min=142, max=358, avg=172.95, stdev=15.36 00:32:07.795 lat (usec): min=153, max=369, avg=184.92, stdev=15.58 00:32:07.795 clat percentiles (usec): 00:32:07.795 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:32:07.795 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:32:07.795 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:32:07.795 | 99.00th=[ 221], 99.50th=[ 233], 99.90th=[ 322], 99.95th=[ 343], 00:32:07.795 | 99.99th=[ 359] 00:32:07.795 bw ( KiB/s): min= 9528, max= 9528, per=29.27%, avg=9528.00, stdev= 0.00, samples=1 00:32:07.795 iops : min= 2382, max= 2382, avg=2382.00, stdev= 0.00, samples=1 00:32:07.795 lat (usec) : 250=91.46%, 500=8.49%, 750=0.02%, 1000=0.02% 00:32:07.795 cpu : usr=4.20%, sys=7.30%, ctx=4653, majf=0, minf=1 00:32:07.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.795 issued rwts: total=2091,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:07.795 00:32:07.795 Run status group 0 (all jobs): 00:32:07.795 READ: bw=24.2MiB/s (25.4MB/s), 87.8KiB/s-8356KiB/s (89.9kB/s-8556kB/s), io=24.3MiB (25.5MB), run=1001-1002msec 00:32:07.795 WRITE: bw=31.8MiB/s (33.3MB/s), 2044KiB/s-9.99MiB/s (2093kB/s-10.5MB/s), io=31.9MiB (33.4MB), run=1001-1002msec 00:32:07.795 00:32:07.795 Disk stats (read/write): 00:32:07.795 nvme0n1: ios=68/512, merge=0/0, ticks=763/86, in_queue=849, util=86.87% 00:32:07.795 nvme0n2: ios=1863/2048, merge=0/0, ticks=1309/312, in_queue=1621, util=89.96% 00:32:07.795 nvme0n3: ios=1909/2048, merge=0/0, ticks=500/341, 
in_queue=841, util=94.69% 00:32:07.795 nvme0n4: ios=1898/2048, merge=0/0, ticks=1328/330, in_queue=1658, util=94.23% 00:32:07.795 11:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:07.795 [global] 00:32:07.795 thread=1 00:32:07.795 invalidate=1 00:32:07.795 rw=write 00:32:07.795 time_based=1 00:32:07.795 runtime=1 00:32:07.795 ioengine=libaio 00:32:07.795 direct=1 00:32:07.795 bs=4096 00:32:07.795 iodepth=128 00:32:07.795 norandommap=0 00:32:07.795 numjobs=1 00:32:07.795 00:32:07.795 verify_dump=1 00:32:07.795 verify_backlog=512 00:32:07.795 verify_state_save=0 00:32:07.795 do_verify=1 00:32:07.795 verify=crc32c-intel 00:32:07.795 [job0] 00:32:07.795 filename=/dev/nvme0n1 00:32:07.795 [job1] 00:32:07.795 filename=/dev/nvme0n2 00:32:07.795 [job2] 00:32:07.795 filename=/dev/nvme0n3 00:32:07.795 [job3] 00:32:07.795 filename=/dev/nvme0n4 00:32:07.795 Could not set queue depth (nvme0n1) 00:32:07.795 Could not set queue depth (nvme0n2) 00:32:07.795 Could not set queue depth (nvme0n3) 00:32:07.795 Could not set queue depth (nvme0n4) 00:32:08.053 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.053 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.053 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.053 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.053 fio-3.35 00:32:08.053 Starting 4 threads 00:32:09.428 00:32:09.428 job0: (groupid=0, jobs=1): err= 0: pid=2491836: Tue Nov 19 11:43:22 2024 00:32:09.428 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:32:09.428 slat (nsec): min=1274, max=14913k, avg=130880.86, stdev=978100.56 00:32:09.428 clat 
(usec): min=4594, max=66835, avg=15969.80, stdev=6425.13 00:32:09.428 lat (usec): min=4611, max=66852, avg=16100.68, stdev=6509.35 00:32:09.428 clat percentiles (usec): 00:32:09.428 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10814], 20.00th=[11994], 00:32:09.428 | 30.00th=[13173], 40.00th=[13960], 50.00th=[14353], 60.00th=[15270], 00:32:09.428 | 70.00th=[16712], 80.00th=[18744], 90.00th=[20841], 95.00th=[24773], 00:32:09.428 | 99.00th=[53216], 99.50th=[60031], 99.90th=[66847], 99.95th=[66847], 00:32:09.428 | 99.99th=[66847] 00:32:09.428 write: IOPS=3763, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1010msec); 0 zone resets 00:32:09.428 slat (usec): min=2, max=12258, avg=134.59, stdev=836.14 00:32:09.428 clat (usec): min=1780, max=66844, avg=18630.94, stdev=13358.07 00:32:09.428 lat (usec): min=1793, max=66854, avg=18765.53, stdev=13451.73 00:32:09.428 clat percentiles (usec): 00:32:09.428 | 1.00th=[ 4424], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10552], 00:32:09.428 | 30.00th=[11338], 40.00th=[12518], 50.00th=[13829], 60.00th=[15270], 00:32:09.428 | 70.00th=[18482], 80.00th=[21890], 90.00th=[47973], 95.00th=[54264], 00:32:09.428 | 99.00th=[57934], 99.50th=[59507], 99.90th=[61080], 99.95th=[66847], 00:32:09.428 | 99.99th=[66847] 00:32:09.428 bw ( KiB/s): min=12776, max=16616, per=21.29%, avg=14696.00, stdev=2715.29, samples=2 00:32:09.428 iops : min= 3194, max= 4154, avg=3674.00, stdev=678.82, samples=2 00:32:09.428 lat (msec) : 2=0.04%, 4=0.47%, 10=9.49%, 20=70.20%, 50=14.31% 00:32:09.428 lat (msec) : 100=5.48% 00:32:09.428 cpu : usr=3.37%, sys=4.46%, ctx=259, majf=0, minf=2 00:32:09.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:32:09.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.428 issued rwts: total=3584,3801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.428 latency : target=0, window=0, percentile=100.00%, depth=128 
00:32:09.428 job1: (groupid=0, jobs=1): err= 0: pid=2491837: Tue Nov 19 11:43:22 2024 00:32:09.428 read: IOPS=6490, BW=25.4MiB/s (26.6MB/s)(25.5MiB/1005msec) 00:32:09.428 slat (nsec): min=1253, max=9036.8k, avg=78270.32, stdev=636099.63 00:32:09.428 clat (usec): min=3082, max=24499, avg=10295.71, stdev=2655.98 00:32:09.428 lat (usec): min=3091, max=24539, avg=10373.98, stdev=2708.13 00:32:09.428 clat percentiles (usec): 00:32:09.428 | 1.00th=[ 4817], 5.00th=[ 6587], 10.00th=[ 7701], 20.00th=[ 8717], 00:32:09.428 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:32:09.428 | 70.00th=[10290], 80.00th=[11994], 90.00th=[14484], 95.00th=[16057], 00:32:09.428 | 99.00th=[17957], 99.50th=[18220], 99.90th=[22938], 99.95th=[24511], 00:32:09.428 | 99.99th=[24511] 00:32:09.428 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:32:09.428 slat (usec): min=2, max=8454, avg=67.45, stdev=503.21 00:32:09.428 clat (usec): min=1647, max=18656, avg=9052.49, stdev=2273.30 00:32:09.428 lat (usec): min=1660, max=18660, avg=9119.94, stdev=2305.96 00:32:09.428 clat percentiles (usec): 00:32:09.428 | 1.00th=[ 3654], 5.00th=[ 5669], 10.00th=[ 6390], 20.00th=[ 7242], 00:32:09.428 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9503], 00:32:09.428 | 70.00th=[10028], 80.00th=[10421], 90.00th=[12518], 95.00th=[13304], 00:32:09.428 | 99.00th=[15139], 99.50th=[16319], 99.90th=[18220], 99.95th=[18220], 00:32:09.428 | 99.99th=[18744] 00:32:09.428 bw ( KiB/s): min=24584, max=28664, per=38.57%, avg=26624.00, stdev=2885.00, samples=2 00:32:09.428 iops : min= 6146, max= 7166, avg=6656.00, stdev=721.25, samples=2 00:32:09.428 lat (msec) : 2=0.04%, 4=0.68%, 10=63.68%, 20=35.50%, 50=0.10% 00:32:09.428 cpu : usr=5.18%, sys=7.97%, ctx=446, majf=0, minf=1 00:32:09.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:32:09.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.428 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.428 issued rwts: total=6523,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.428 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.428 job2: (groupid=0, jobs=1): err= 0: pid=2491839: Tue Nov 19 11:43:22 2024 00:32:09.428 read: IOPS=4692, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1009msec) 00:32:09.428 slat (nsec): min=1489, max=10626k, avg=92597.59, stdev=757764.75 00:32:09.428 clat (usec): min=1101, max=22305, avg=11786.16, stdev=2754.15 00:32:09.428 lat (usec): min=7193, max=22314, avg=11878.76, stdev=2825.22 00:32:09.428 clat percentiles (usec): 00:32:09.428 | 1.00th=[ 7570], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10159], 00:32:09.428 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:32:09.428 | 70.00th=[11731], 80.00th=[12911], 90.00th=[16188], 95.00th=[18744], 00:32:09.428 | 99.00th=[20579], 99.50th=[20841], 99.90th=[22152], 99.95th=[22414], 00:32:09.428 | 99.99th=[22414] 00:32:09.428 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:32:09.428 slat (usec): min=2, max=45994, avg=104.86, stdev=1063.06 00:32:09.428 clat (usec): min=1909, max=128354, avg=12013.52, stdev=7543.02 00:32:09.428 lat (msec): min=3, max=128, avg=12.12, stdev= 7.72 00:32:09.428 clat percentiles (msec): 00:32:09.428 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:32:09.428 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:32:09.428 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 16], 95.00th=[ 22], 00:32:09.428 | 99.00th=[ 43], 99.50th=[ 61], 99.90th=[ 129], 99.95th=[ 129], 00:32:09.429 | 99.99th=[ 129] 00:32:09.429 bw ( KiB/s): min=16384, max=24568, per=29.66%, avg=20476.00, stdev=5786.96, samples=2 00:32:09.429 iops : min= 4096, max= 6142, avg=5119.00, stdev=1446.74, samples=2 00:32:09.429 lat (msec) : 2=0.02%, 4=0.06%, 10=24.34%, 20=72.15%, 50=3.11% 00:32:09.429 lat (msec) : 100=0.24%, 250=0.08% 00:32:09.429 cpu : usr=4.96%, sys=5.75%, 
ctx=342, majf=0, minf=1 00:32:09.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:09.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.429 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.429 job3: (groupid=0, jobs=1): err= 0: pid=2491841: Tue Nov 19 11:43:22 2024 00:32:09.429 read: IOPS=2381, BW=9526KiB/s (9755kB/s)(9.78MiB/1051msec) 00:32:09.429 slat (nsec): min=1619, max=23423k, avg=226170.39, stdev=1497294.16 00:32:09.429 clat (usec): min=10211, max=72733, avg=29630.51, stdev=14995.59 00:32:09.429 lat (usec): min=10215, max=72738, avg=29856.68, stdev=15062.25 00:32:09.429 clat percentiles (usec): 00:32:09.429 | 1.00th=[12518], 5.00th=[13960], 10.00th=[14222], 20.00th=[15008], 00:32:09.429 | 30.00th=[16319], 40.00th=[22152], 50.00th=[27657], 60.00th=[32113], 00:32:09.429 | 70.00th=[36963], 80.00th=[41157], 90.00th=[53216], 95.00th=[56361], 00:32:09.429 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:32:09.429 | 99.99th=[72877] 00:32:09.429 write: IOPS=2435, BW=9743KiB/s (9977kB/s)(10.0MiB/1051msec); 0 zone resets 00:32:09.429 slat (usec): min=2, max=24821, avg=164.71, stdev=1244.30 00:32:09.429 clat (usec): min=5897, max=60727, avg=22981.82, stdev=9509.00 00:32:09.429 lat (usec): min=5911, max=60759, avg=23146.54, stdev=9627.22 00:32:09.429 clat percentiles (usec): 00:32:09.429 | 1.00th=[ 9765], 5.00th=[13435], 10.00th=[13566], 20.00th=[13829], 00:32:09.429 | 30.00th=[14484], 40.00th=[16188], 50.00th=[22152], 60.00th=[23725], 00:32:09.429 | 70.00th=[27657], 80.00th=[31851], 90.00th=[35914], 95.00th=[40633], 00:32:09.429 | 99.00th=[46924], 99.50th=[46924], 99.90th=[55313], 99.95th=[55837], 00:32:09.429 | 99.99th=[60556] 00:32:09.429 bw ( KiB/s): min= 8192, max=12288, per=14.83%, 
avg=10240.00, stdev=2896.31, samples=2 00:32:09.429 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:32:09.429 lat (msec) : 10=0.51%, 20=40.55%, 50=53.21%, 100=5.73% 00:32:09.429 cpu : usr=2.86%, sys=2.76%, ctx=178, majf=0, minf=1 00:32:09.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:32:09.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.429 issued rwts: total=2503,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.429 00:32:09.429 Run status group 0 (all jobs): 00:32:09.429 READ: bw=64.5MiB/s (67.6MB/s), 9526KiB/s-25.4MiB/s (9755kB/s-26.6MB/s), io=67.8MiB (71.0MB), run=1005-1051msec 00:32:09.429 WRITE: bw=67.4MiB/s (70.7MB/s), 9743KiB/s-25.9MiB/s (9977kB/s-27.1MB/s), io=70.8MiB (74.3MB), run=1005-1051msec 00:32:09.429 00:32:09.429 Disk stats (read/write): 00:32:09.429 nvme0n1: ios=2958/3072, merge=0/0, ticks=46060/58876, in_queue=104936, util=87.07% 00:32:09.429 nvme0n2: ios=5642/5632, merge=0/0, ticks=56129/48647, in_queue=104776, util=100.00% 00:32:09.429 nvme0n3: ios=4067/4096, merge=0/0, ticks=46215/41075, in_queue=87290, util=100.00% 00:32:09.429 nvme0n4: ios=2104/2451, merge=0/0, ticks=27121/25393, in_queue=52514, util=90.99% 00:32:09.429 11:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:09.429 [global] 00:32:09.429 thread=1 00:32:09.429 invalidate=1 00:32:09.429 rw=randwrite 00:32:09.429 time_based=1 00:32:09.429 runtime=1 00:32:09.429 ioengine=libaio 00:32:09.429 direct=1 00:32:09.429 bs=4096 00:32:09.429 iodepth=128 00:32:09.429 norandommap=0 00:32:09.429 numjobs=1 00:32:09.429 00:32:09.429 verify_dump=1 00:32:09.429 verify_backlog=512 00:32:09.429 
verify_state_save=0 00:32:09.429 do_verify=1 00:32:09.429 verify=crc32c-intel 00:32:09.429 [job0] 00:32:09.429 filename=/dev/nvme0n1 00:32:09.429 [job1] 00:32:09.429 filename=/dev/nvme0n2 00:32:09.429 [job2] 00:32:09.429 filename=/dev/nvme0n3 00:32:09.429 [job3] 00:32:09.429 filename=/dev/nvme0n4 00:32:09.429 Could not set queue depth (nvme0n1) 00:32:09.429 Could not set queue depth (nvme0n2) 00:32:09.429 Could not set queue depth (nvme0n3) 00:32:09.429 Could not set queue depth (nvme0n4) 00:32:09.687 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:09.687 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:09.687 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:09.687 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:09.687 fio-3.35 00:32:09.687 Starting 4 threads 00:32:11.083 00:32:11.083 job0: (groupid=0, jobs=1): err= 0: pid=2492214: Tue Nov 19 11:43:24 2024 00:32:11.083 read: IOPS=4838, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1005msec) 00:32:11.083 slat (nsec): min=1331, max=14733k, avg=99287.61, stdev=805924.35 00:32:11.083 clat (usec): min=1345, max=32088, avg=12231.97, stdev=3750.46 00:32:11.083 lat (usec): min=4389, max=32118, avg=12331.26, stdev=3820.06 00:32:11.083 clat percentiles (usec): 00:32:11.083 | 1.00th=[ 6194], 5.00th=[ 7570], 10.00th=[ 8356], 20.00th=[ 9241], 00:32:11.083 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11338], 60.00th=[12125], 00:32:11.083 | 70.00th=[13829], 80.00th=[15401], 90.00th=[17433], 95.00th=[18744], 00:32:11.083 | 99.00th=[23725], 99.50th=[26084], 99.90th=[29754], 99.95th=[29754], 00:32:11.083 | 99.99th=[32113] 00:32:11.083 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:32:11.083 slat (usec): min=2, max=9361, avg=95.63, stdev=548.58 
00:32:11.083 clat (usec): min=1515, max=67022, avg=13195.60, stdev=10944.73 00:32:11.083 lat (usec): min=1528, max=67034, avg=13291.23, stdev=11024.27 00:32:11.083 clat percentiles (usec): 00:32:11.083 | 1.00th=[ 3949], 5.00th=[ 6194], 10.00th=[ 7177], 20.00th=[ 8979], 00:32:11.083 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10421], 60.00th=[10683], 00:32:11.083 | 70.00th=[11469], 80.00th=[12256], 90.00th=[18744], 95.00th=[40109], 00:32:11.083 | 99.00th=[66323], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:32:11.083 | 99.99th=[66847] 00:32:11.083 bw ( KiB/s): min=16384, max=24576, per=28.05%, avg=20480.00, stdev=5792.62, samples=2 00:32:11.083 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:32:11.083 lat (msec) : 2=0.04%, 4=0.49%, 10=34.10%, 20=59.31%, 50=4.34% 00:32:11.083 lat (msec) : 100=1.72% 00:32:11.083 cpu : usr=3.69%, sys=5.38%, ctx=502, majf=0, minf=1 00:32:11.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:11.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.083 issued rwts: total=4863,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.083 job1: (groupid=0, jobs=1): err= 0: pid=2492215: Tue Nov 19 11:43:24 2024 00:32:11.083 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:32:11.083 slat (nsec): min=1327, max=11261k, avg=93067.94, stdev=738423.61 00:32:11.083 clat (usec): min=4312, max=28985, avg=11821.68, stdev=3132.59 00:32:11.083 lat (usec): min=4318, max=29011, avg=11914.74, stdev=3192.38 00:32:11.083 clat percentiles (usec): 00:32:11.083 | 1.00th=[ 5997], 5.00th=[ 7242], 10.00th=[ 8979], 20.00th=[ 9765], 00:32:11.083 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11338], 60.00th=[11994], 00:32:11.083 | 70.00th=[12518], 80.00th=[13304], 90.00th=[15926], 95.00th=[18744], 00:32:11.083 | 
99.00th=[22676], 99.50th=[22938], 99.90th=[22938], 99.95th=[22938], 00:32:11.083 | 99.99th=[28967] 00:32:11.083 write: IOPS=5353, BW=20.9MiB/s (21.9MB/s)(21.1MiB/1007msec); 0 zone resets 00:32:11.083 slat (usec): min=2, max=16842, avg=89.66, stdev=662.78 00:32:11.083 clat (usec): min=2631, max=41104, avg=12371.57, stdev=5731.09 00:32:11.083 lat (usec): min=2641, max=41108, avg=12461.24, stdev=5770.28 00:32:11.083 clat percentiles (usec): 00:32:11.083 | 1.00th=[ 4817], 5.00th=[ 6128], 10.00th=[ 7308], 20.00th=[ 8848], 00:32:11.083 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10814], 60.00th=[11600], 00:32:11.083 | 70.00th=[11994], 80.00th=[14615], 90.00th=[20317], 95.00th=[26084], 00:32:11.083 | 99.00th=[33817], 99.50th=[37487], 99.90th=[41157], 99.95th=[41157], 00:32:11.083 | 99.99th=[41157] 00:32:11.083 bw ( KiB/s): min=17536, max=24568, per=28.84%, avg=21052.00, stdev=4972.37, samples=2 00:32:11.083 iops : min= 4384, max= 6142, avg=5263.00, stdev=1243.09, samples=2 00:32:11.083 lat (msec) : 4=0.26%, 10=28.17%, 20=64.88%, 50=6.69% 00:32:11.083 cpu : usr=4.57%, sys=5.96%, ctx=396, majf=0, minf=1 00:32:11.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:11.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.083 issued rwts: total=5120,5391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.083 job2: (groupid=0, jobs=1): err= 0: pid=2492216: Tue Nov 19 11:43:24 2024 00:32:11.083 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:32:11.083 slat (nsec): min=1706, max=21511k, avg=118175.25, stdev=1001122.39 00:32:11.083 clat (usec): min=6039, max=41738, avg=15635.80, stdev=6224.34 00:32:11.083 lat (usec): min=6050, max=41767, avg=15753.97, stdev=6296.23 00:32:11.083 clat percentiles (usec): 00:32:11.083 | 1.00th=[ 8356], 5.00th=[ 8979], 
10.00th=[ 9634], 20.00th=[10421], 00:32:11.083 | 30.00th=[11600], 40.00th=[12256], 50.00th=[13829], 60.00th=[14877], 00:32:11.083 | 70.00th=[18220], 80.00th=[20579], 90.00th=[25560], 95.00th=[28443], 00:32:11.083 | 99.00th=[35914], 99.50th=[35914], 99.90th=[39584], 99.95th=[39584], 00:32:11.083 | 99.99th=[41681] 00:32:11.083 write: IOPS=3942, BW=15.4MiB/s (16.1MB/s)(15.4MiB/1003msec); 0 zone resets 00:32:11.083 slat (usec): min=2, max=16726, avg=139.15, stdev=1057.45 00:32:11.083 clat (usec): min=1622, max=138650, avg=18003.39, stdev=17998.36 00:32:11.083 lat (usec): min=1637, max=138661, avg=18142.54, stdev=18119.33 00:32:11.083 clat percentiles (msec): 00:32:11.083 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:32:11.083 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 14], 00:32:11.083 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 24], 95.00th=[ 55], 00:32:11.083 | 99.00th=[ 107], 99.50th=[ 129], 99.90th=[ 140], 99.95th=[ 140], 00:32:11.083 | 99.99th=[ 140] 00:32:11.083 bw ( KiB/s): min=10288, max=20328, per=20.97%, avg=15308.00, stdev=7099.35, samples=2 00:32:11.083 iops : min= 2572, max= 5082, avg=3827.00, stdev=1774.84, samples=2 00:32:11.083 lat (msec) : 2=0.03%, 4=0.01%, 10=18.25%, 20=60.75%, 50=18.32% 00:32:11.083 lat (msec) : 100=1.91%, 250=0.73% 00:32:11.083 cpu : usr=3.99%, sys=4.59%, ctx=194, majf=0, minf=1 00:32:11.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:11.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.083 issued rwts: total=3584,3954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.083 job3: (groupid=0, jobs=1): err= 0: pid=2492217: Tue Nov 19 11:43:24 2024 00:32:11.083 read: IOPS=4405, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1045msec) 00:32:11.083 slat (nsec): min=1289, max=19654k, avg=99490.60, 
stdev=791955.46 00:32:11.083 clat (usec): min=3441, max=53137, avg=14771.15, stdev=7915.48 00:32:11.083 lat (usec): min=3453, max=53142, avg=14870.64, stdev=7947.31 00:32:11.083 clat percentiles (usec): 00:32:11.083 | 1.00th=[ 3720], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10290], 00:32:11.083 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12649], 60.00th=[13042], 00:32:11.083 | 70.00th=[14484], 80.00th=[17171], 90.00th=[23462], 95.00th=[32375], 00:32:11.083 | 99.00th=[48497], 99.50th=[52167], 99.90th=[52691], 99.95th=[53216], 00:32:11.083 | 99.99th=[53216] 00:32:11.083 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1045msec); 0 zone resets 00:32:11.083 slat (nsec): min=1884, max=28079k, avg=101956.03, stdev=889241.88 00:32:11.083 clat (usec): min=1225, max=87132, avg=14008.95, stdev=7589.62 00:32:11.083 lat (usec): min=1233, max=87141, avg=14110.91, stdev=7622.01 00:32:11.083 clat percentiles (usec): 00:32:11.083 | 1.00th=[ 5407], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[10552], 00:32:11.083 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12518], 00:32:11.083 | 70.00th=[13698], 80.00th=[16909], 90.00th=[21365], 95.00th=[22676], 00:32:11.083 | 99.00th=[50070], 99.50th=[73925], 99.90th=[84411], 99.95th=[84411], 00:32:11.083 | 99.99th=[87557] 00:32:11.083 bw ( KiB/s): min=16384, max=20480, per=25.25%, avg=18432.00, stdev=2896.31, samples=2 00:32:11.083 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:32:11.083 lat (msec) : 2=0.09%, 4=0.56%, 10=14.48%, 20=71.02%, 50=12.91% 00:32:11.083 lat (msec) : 100=0.94% 00:32:11.083 cpu : usr=3.26%, sys=4.60%, ctx=292, majf=0, minf=1 00:32:11.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:11.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.083 issued rwts: total=4604,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.083 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:32:11.083 00:32:11.083 Run status group 0 (all jobs): 00:32:11.083 READ: bw=67.9MiB/s (71.2MB/s), 14.0MiB/s-19.9MiB/s (14.6MB/s-20.8MB/s), io=71.0MiB (74.4MB), run=1003-1045msec 00:32:11.083 WRITE: bw=71.3MiB/s (74.8MB/s), 15.4MiB/s-20.9MiB/s (16.1MB/s-21.9MB/s), io=74.5MiB (78.1MB), run=1003-1045msec 00:32:11.083 00:32:11.084 Disk stats (read/write): 00:32:11.084 nvme0n1: ios=4658/4727, merge=0/0, ticks=53999/48502, in_queue=102501, util=85.37% 00:32:11.084 nvme0n2: ios=4598/4608, merge=0/0, ticks=49270/48975, in_queue=98245, util=89.54% 00:32:11.084 nvme0n3: ios=2771/3072, merge=0/0, ticks=45019/59788, in_queue=104807, util=94.80% 00:32:11.084 nvme0n4: ios=3641/3946, merge=0/0, ticks=38765/43153, in_queue=81918, util=95.39% 00:32:11.084 11:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:11.084 11:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2492445 00:32:11.084 11:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:11.084 11:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:11.084 [global] 00:32:11.084 thread=1 00:32:11.084 invalidate=1 00:32:11.084 rw=read 00:32:11.084 time_based=1 00:32:11.084 runtime=10 00:32:11.084 ioengine=libaio 00:32:11.084 direct=1 00:32:11.084 bs=4096 00:32:11.084 iodepth=1 00:32:11.084 norandommap=1 00:32:11.084 numjobs=1 00:32:11.084 00:32:11.084 [job0] 00:32:11.084 filename=/dev/nvme0n1 00:32:11.084 [job1] 00:32:11.084 filename=/dev/nvme0n2 00:32:11.084 [job2] 00:32:11.084 filename=/dev/nvme0n3 00:32:11.084 [job3] 00:32:11.084 filename=/dev/nvme0n4 00:32:11.084 Could not set queue depth (nvme0n1) 00:32:11.084 Could not set queue depth (nvme0n2) 00:32:11.084 Could not set queue depth 
(nvme0n3) 00:32:11.084 Could not set queue depth (nvme0n4) 00:32:11.346 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:11.346 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:11.346 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:11.346 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:11.346 fio-3.35 00:32:11.346 Starting 4 threads 00:32:13.872 11:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:14.130 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43405312, buflen=4096 00:32:14.130 fio: pid=2492590, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:14.130 11:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:14.388 11:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:14.388 11:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:14.388 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1064960, buflen=4096 00:32:14.388 fio: pid=2492589, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:14.646 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=47906816, buflen=4096 00:32:14.646 fio: pid=2492587, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:14.646 11:43:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:14.646 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:14.646 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:14.646 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:14.646 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=9064448, buflen=4096 00:32:14.646 fio: pid=2492588, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:14.904 00:32:14.904 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2492587: Tue Nov 19 11:43:28 2024 00:32:14.904 read: IOPS=3745, BW=14.6MiB/s (15.3MB/s)(45.7MiB/3123msec) 00:32:14.904 slat (usec): min=6, max=10659, avg=10.56, stdev=144.75 00:32:14.904 clat (usec): min=179, max=1638, avg=252.74, stdev=35.64 00:32:14.904 lat (usec): min=187, max=10969, avg=263.30, stdev=150.99 00:32:14.904 clat percentiles (usec): 00:32:14.904 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 231], 20.00th=[ 239], 00:32:14.904 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:32:14.904 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 306], 00:32:14.904 | 99.00th=[ 343], 99.50th=[ 367], 99.90th=[ 433], 99.95th=[ 562], 00:32:14.904 | 99.99th=[ 1532] 00:32:14.904 bw ( KiB/s): min=14328, max=15656, per=51.08%, avg=15078.67, stdev=558.27, samples=6 00:32:14.904 iops : min= 3582, max= 3914, avg=3769.67, stdev=139.57, samples=6 00:32:14.904 lat (usec) : 250=58.82%, 500=41.12%, 
750=0.02%, 1000=0.01% 00:32:14.904 lat (msec) : 2=0.03% 00:32:14.904 cpu : usr=1.63%, sys=6.31%, ctx=11701, majf=0, minf=2 00:32:14.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.904 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.904 issued rwts: total=11697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:14.904 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2492588: Tue Nov 19 11:43:28 2024 00:32:14.904 read: IOPS=659, BW=2638KiB/s (2701kB/s)(8852KiB/3356msec) 00:32:14.904 slat (usec): min=8, max=29767, avg=22.84, stdev=632.45 00:32:14.904 clat (usec): min=195, max=42033, avg=1479.63, stdev=7048.01 00:32:14.904 lat (usec): min=204, max=70982, avg=1502.48, stdev=7152.62 00:32:14.904 clat percentiles (usec): 00:32:14.904 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 215], 00:32:14.904 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 223], 00:32:14.904 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 285], 00:32:14.904 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:32:14.904 | 99.99th=[42206] 00:32:14.904 bw ( KiB/s): min= 93, max=16992, per=9.96%, avg=2939.50, stdev=6884.58, samples=6 00:32:14.904 iops : min= 23, max= 4248, avg=734.83, stdev=1721.17, samples=6 00:32:14.904 lat (usec) : 250=91.10%, 500=5.69%, 750=0.05% 00:32:14.904 lat (msec) : 2=0.05%, 50=3.07% 00:32:14.904 cpu : usr=0.30%, sys=1.28%, ctx=2217, majf=0, minf=2 00:32:14.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.904 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.904 issued rwts: total=2214,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:32:14.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:14.904 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2492589: Tue Nov 19 11:43:28 2024 00:32:14.904 read: IOPS=88, BW=353KiB/s (362kB/s)(1040KiB/2944msec) 00:32:14.904 slat (nsec): min=6753, max=40931, avg=13563.95, stdev=6509.13 00:32:14.904 clat (usec): min=214, max=41132, avg=11223.10, stdev=18082.04 00:32:14.904 lat (usec): min=221, max=41143, avg=11236.61, stdev=18086.86 00:32:14.904 clat percentiles (usec): 00:32:14.904 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:32:14.904 | 30.00th=[ 247], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:32:14.904 | 70.00th=[ 367], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:14.904 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:14.904 | 99.99th=[41157] 00:32:14.904 bw ( KiB/s): min= 96, max= 1112, per=1.35%, avg=398.40, stdev=450.90, samples=5 00:32:14.904 iops : min= 24, max= 278, avg=99.60, stdev=112.72, samples=5 00:32:14.904 lat (usec) : 250=31.42%, 500=41.00%, 1000=0.38% 00:32:14.904 lat (msec) : 50=26.82% 00:32:14.904 cpu : usr=0.00%, sys=0.27%, ctx=262, majf=0, minf=1 00:32:14.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.905 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.905 issued rwts: total=261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:14.905 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2492590: Tue Nov 19 11:43:28 2024 00:32:14.905 read: IOPS=3903, BW=15.2MiB/s (16.0MB/s)(41.4MiB/2715msec) 00:32:14.905 slat (nsec): min=7228, max=47136, avg=8477.44, stdev=1548.05 00:32:14.905 clat (usec): min=173, 
max=1911, avg=243.77, stdev=37.63 00:32:14.905 lat (usec): min=180, max=1919, avg=252.24, stdev=37.72 00:32:14.905 clat percentiles (usec): 00:32:14.905 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 235], 00:32:14.905 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:32:14.905 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 281], 00:32:14.905 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 545], 99.95th=[ 840], 00:32:14.905 | 99.99th=[ 1663] 00:32:14.905 bw ( KiB/s): min=14712, max=16608, per=53.34%, avg=15744.00, stdev=677.34, samples=5 00:32:14.905 iops : min= 3678, max= 4152, avg=3936.00, stdev=169.33, samples=5 00:32:14.905 lat (usec) : 250=74.10%, 500=25.79%, 750=0.03%, 1000=0.03% 00:32:14.905 lat (msec) : 2=0.05% 00:32:14.905 cpu : usr=2.06%, sys=6.48%, ctx=10599, majf=0, minf=2 00:32:14.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.905 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.905 issued rwts: total=10598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:14.905 00:32:14.905 Run status group 0 (all jobs): 00:32:14.905 READ: bw=28.8MiB/s (30.2MB/s), 353KiB/s-15.2MiB/s (362kB/s-16.0MB/s), io=96.7MiB (101MB), run=2715-3356msec 00:32:14.905 00:32:14.905 Disk stats (read/write): 00:32:14.905 nvme0n1: ios=11696/0, merge=0/0, ticks=2831/0, in_queue=2831, util=94.85% 00:32:14.905 nvme0n2: ios=2214/0, merge=0/0, ticks=3256/0, in_queue=3256, util=95.12% 00:32:14.905 nvme0n3: ios=299/0, merge=0/0, ticks=3717/0, in_queue=3717, util=98.95% 00:32:14.905 nvme0n4: ios=10285/0, merge=0/0, ticks=3029/0, in_queue=3029, util=98.96% 00:32:14.905 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:32:14.905 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:15.163 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:15.163 11:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:15.420 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:15.420 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:15.677 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:15.677 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:15.677 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:15.677 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2492445 00:32:15.677 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:15.677 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:15.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:15.934 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:15.934 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:15.934 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:15.935 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:15.935 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:15.935 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:15.935 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:15.935 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:15.935 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:15.935 nvmf hotplug test: fio failed as expected 00:32:15.935 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:16.194 11:43:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:16.194 rmmod nvme_tcp 00:32:16.194 rmmod nvme_fabrics 00:32:16.194 rmmod nvme_keyring 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2489763 ']' 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2489763 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2489763 ']' 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2489763 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2489763 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2489763' 00:32:16.194 killing process with pid 2489763 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2489763 00:32:16.194 11:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2489763 00:32:16.454 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:16.454 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:16.454 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:16.454 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:16.454 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:16.454 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:16.454 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:16.454 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.454 11:43:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:16.454 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.454 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.454 11:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.359 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:18.359 00:32:18.359 real 0m25.944s 00:32:18.359 user 1m31.116s 00:32:18.359 sys 0m11.437s 00:32:18.359 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:18.359 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:18.359 ************************************ 00:32:18.359 END TEST nvmf_fio_target 00:32:18.359 ************************************ 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:18.619 ************************************ 00:32:18.619 START TEST nvmf_bdevio 00:32:18.619 ************************************ 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:18.619 * Looking for test storage... 00:32:18.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:18.619 11:43:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:18.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.619 --rc genhtml_branch_coverage=1 00:32:18.619 --rc genhtml_function_coverage=1 00:32:18.619 --rc genhtml_legend=1 00:32:18.619 --rc geninfo_all_blocks=1 00:32:18.619 --rc geninfo_unexecuted_blocks=1 00:32:18.619 00:32:18.619 ' 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:18.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.619 --rc genhtml_branch_coverage=1 00:32:18.619 --rc genhtml_function_coverage=1 00:32:18.619 --rc genhtml_legend=1 00:32:18.619 --rc geninfo_all_blocks=1 00:32:18.619 --rc geninfo_unexecuted_blocks=1 00:32:18.619 00:32:18.619 ' 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:18.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.619 --rc genhtml_branch_coverage=1 00:32:18.619 --rc genhtml_function_coverage=1 00:32:18.619 --rc genhtml_legend=1 00:32:18.619 --rc geninfo_all_blocks=1 00:32:18.619 --rc geninfo_unexecuted_blocks=1 00:32:18.619 00:32:18.619 ' 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:18.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.619 --rc genhtml_branch_coverage=1 00:32:18.619 --rc genhtml_function_coverage=1 00:32:18.619 --rc genhtml_legend=1 00:32:18.619 --rc 
geninfo_all_blocks=1 00:32:18.619 --rc geninfo_unexecuted_blocks=1 00:32:18.619 00:32:18.619 ' 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.619 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:18.879 11:43:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:18.879 11:43:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.879 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:18.880 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:18.880 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:18.880 11:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:25.447 11:43:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:25.447 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:25.448 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:25.448 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.448 11:43:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:25.448 Found net devices under 0000:86:00.0: cvl_0_0 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:25.448 Found net devices under 0000:86:00.1: cvl_0_1 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:25.448 11:43:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:25.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:25.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:32:25.448 00:32:25.448 --- 10.0.0.2 ping statistics --- 00:32:25.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.448 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:25.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:25.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:32:25.448 00:32:25.448 --- 10.0.0.1 ping statistics --- 00:32:25.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.448 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.448 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2496830 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2496830 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2496830 ']' 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.449 [2024-11-19 11:43:38.350295] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:25.449 [2024-11-19 11:43:38.351253] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
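The waitforlisten step traced above blocks until the freshly launched nvmf_tgt is up and serving its RPC socket. A minimal sketch of that polling pattern (the socket path, retry count, and sleep interval are illustrative stand-ins, not the autotest's exact values; the real helper also verifies the pid is alive):

```shell
# Poll until the target's RPC socket path exists, failing after max_retries
# attempts. A hypothetical reduction of the waitforlisten helper seen above.
wait_for_rpc_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ ! -e "$sock" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timed out waiting for $sock" >&2
            return 1
        fi
        sleep 0.1
    done
}
```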
00:32:25.449 [2024-11-19 11:43:38.351287] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.449 [2024-11-19 11:43:38.429888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:25.449 [2024-11-19 11:43:38.471437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:25.449 [2024-11-19 11:43:38.471473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.449 [2024-11-19 11:43:38.471481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.449 [2024-11-19 11:43:38.471487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:25.449 [2024-11-19 11:43:38.471492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:25.449 [2024-11-19 11:43:38.473108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:25.449 [2024-11-19 11:43:38.473219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:25.449 [2024-11-19 11:43:38.473327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:25.449 [2024-11-19 11:43:38.473326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:25.449 [2024-11-19 11:43:38.539103] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:25.449 [2024-11-19 11:43:38.539686] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:25.449 [2024-11-19 11:43:38.540048] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:25.449 [2024-11-19 11:43:38.540458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:25.449 [2024-11-19 11:43:38.540491] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.449 [2024-11-19 11:43:38.606107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.449 Malloc0 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.449 [2024-11-19 11:43:38.690361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
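The bringup traced above (create the TCP transport, a malloc bdev, a subsystem with one namespace, and a listener on 10.0.0.2:4420) can be restated as plain rpc.py calls. This is a sketch: rpc_cmd in the harness wraps scripts/rpc.py with the proper socket argument, and the overridable `$RPC` prefix is an assumption added here so the sequence can be dry-run.

```shell
# Dry-runnable sketch of the bdevio target bringup shown in the trace.
# Override RPC (e.g. RPC="echo rpc.py") to print the calls instead of
# issuing them against a live target.
: "${RPC:=scripts/rpc.py -s /var/tmp/spdk.sock}"
setup_bdevio_target() {
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
```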
00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:25.449 { 00:32:25.449 "params": { 00:32:25.449 "name": "Nvme$subsystem", 00:32:25.449 "trtype": "$TEST_TRANSPORT", 00:32:25.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.449 "adrfam": "ipv4", 00:32:25.449 "trsvcid": "$NVMF_PORT", 00:32:25.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.449 "hdgst": ${hdgst:-false}, 00:32:25.449 "ddgst": ${ddgst:-false} 00:32:25.449 }, 00:32:25.449 "method": "bdev_nvme_attach_controller" 00:32:25.449 } 00:32:25.449 EOF 00:32:25.449 )") 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
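The gen_nvmf_target_json heredoc just traced expands one bdev_nvme_attach_controller params block per subsystem index and merges the blocks with jq. A self-contained re-creation of the template expansion, using the default values visible in the trace (the jq merge step is omitted from this sketch):

```shell
# Expand the attach-controller params template for each subsystem index, as
# gen_nvmf_target_json does above. The real helper collects these blocks
# into an array and pipes them through jq; this sketch just prints them.
gen_attach_params() {
    local subsystem
    for subsystem in "${@:-1}"; do
        cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    done
}
```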
00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:25.449 11:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:25.449 "params": { 00:32:25.449 "name": "Nvme1", 00:32:25.449 "trtype": "tcp", 00:32:25.449 "traddr": "10.0.0.2", 00:32:25.449 "adrfam": "ipv4", 00:32:25.449 "trsvcid": "4420", 00:32:25.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:25.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:25.449 "hdgst": false, 00:32:25.449 "ddgst": false 00:32:25.449 }, 00:32:25.449 "method": "bdev_nvme_attach_controller" 00:32:25.449 }' 00:32:25.449 [2024-11-19 11:43:38.743320] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:32:25.449 [2024-11-19 11:43:38.743367] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496854 ] 00:32:25.449 [2024-11-19 11:43:38.820472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:25.449 [2024-11-19 11:43:38.864713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.449 [2024-11-19 11:43:38.864820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.449 [2024-11-19 11:43:38.864820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:25.449 I/O targets: 00:32:25.449 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:25.449 00:32:25.449 00:32:25.449 CUnit - A unit testing framework for C - Version 2.1-3 00:32:25.449 http://cunit.sourceforge.net/ 00:32:25.449 00:32:25.449 00:32:25.449 Suite: bdevio tests on: Nvme1n1 00:32:25.449 Test: blockdev write read block ...passed 00:32:25.706 Test: blockdev write zeroes read block ...passed 00:32:25.706 Test: blockdev write zeroes read no split ...passed 00:32:25.706 Test: blockdev 
write zeroes read split ...passed 00:32:25.706 Test: blockdev write zeroes read split partial ...passed 00:32:25.706 Test: blockdev reset ...[2024-11-19 11:43:39.287147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:25.706 [2024-11-19 11:43:39.287212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5d340 (9): Bad file descriptor 00:32:25.706 [2024-11-19 11:43:39.290639] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:25.706 passed 00:32:25.706 Test: blockdev write read 8 blocks ...passed 00:32:25.706 Test: blockdev write read size > 128k ...passed 00:32:25.706 Test: blockdev write read invalid size ...passed 00:32:25.706 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:25.706 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:25.706 Test: blockdev write read max offset ...passed 00:32:25.706 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:25.706 Test: blockdev writev readv 8 blocks ...passed 00:32:25.964 Test: blockdev writev readv 30 x 1block ...passed 00:32:25.964 Test: blockdev writev readv block ...passed 00:32:25.964 Test: blockdev writev readv size > 128k ...passed 00:32:25.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:25.964 Test: blockdev comparev and writev ...[2024-11-19 11:43:39.546904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.964 [2024-11-19 11:43:39.546933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:25.964 [2024-11-19 11:43:39.546952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.964 
[2024-11-19 11:43:39.546960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:25.964 [2024-11-19 11:43:39.547262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.964 [2024-11-19 11:43:39.547274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:25.964 [2024-11-19 11:43:39.547286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.964 [2024-11-19 11:43:39.547294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:25.964 [2024-11-19 11:43:39.547572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.964 [2024-11-19 11:43:39.547584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:25.964 [2024-11-19 11:43:39.547596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.964 [2024-11-19 11:43:39.547606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:25.964 [2024-11-19 11:43:39.547894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.964 [2024-11-19 11:43:39.547906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:25.964 [2024-11-19 11:43:39.547919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:25.964 [2024-11-19 11:43:39.547927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:25.964 passed 00:32:25.964 Test: blockdev nvme passthru rw ...passed 00:32:25.964 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:43:39.631257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:25.964 [2024-11-19 11:43:39.631274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:25.964 [2024-11-19 11:43:39.631384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:25.964 [2024-11-19 11:43:39.631395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:25.964 [2024-11-19 11:43:39.631504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:25.964 [2024-11-19 11:43:39.631514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:25.964 [2024-11-19 11:43:39.631629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:25.964 [2024-11-19 11:43:39.631640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:25.964 passed 00:32:25.964 Test: blockdev nvme admin passthru ...passed 00:32:25.964 Test: blockdev copy ...passed 00:32:25.964 00:32:25.964 Run Summary: Type Total Ran Passed Failed Inactive 00:32:25.964 suites 1 1 n/a 0 0 00:32:25.964 tests 23 23 23 0 0 00:32:25.964 asserts 152 152 152 0 n/a 00:32:25.964 00:32:25.964 Elapsed time = 1.025 
seconds 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:26.257 rmmod nvme_tcp 00:32:26.257 rmmod nvme_fabrics 00:32:26.257 rmmod nvme_keyring 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2496830 ']' 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2496830 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2496830 ']' 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2496830 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2496830 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2496830' 00:32:26.257 killing process with pid 2496830 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2496830 00:32:26.257 11:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2496830 00:32:26.551 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:26.551 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:26.551 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:26.551 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:32:26.551 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:26.551 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:26.551 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:26.552 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:26.552 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:26.552 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.552 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.552 11:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.511 11:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:28.511 00:32:28.511 real 0m10.005s 00:32:28.511 user 0m9.277s 00:32:28.511 sys 0m5.174s 00:32:28.511 11:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.511 11:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:28.511 ************************************ 00:32:28.511 END TEST nvmf_bdevio 00:32:28.511 ************************************ 00:32:28.511 11:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:28.511 00:32:28.511 real 4m33.013s 00:32:28.511 user 9m6.737s 00:32:28.511 sys 1m51.869s 00:32:28.511 11:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:32:28.511 11:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:28.511 ************************************ 00:32:28.511 END TEST nvmf_target_core_interrupt_mode 00:32:28.511 ************************************ 00:32:28.511 11:43:42 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:28.511 11:43:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:28.511 11:43:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:28.511 11:43:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:28.770 ************************************ 00:32:28.770 START TEST nvmf_interrupt 00:32:28.770 ************************************ 00:32:28.770 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:28.770 * Looking for test storage... 
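The iptr teardown traced above restores iptables from a saved ruleset with the SPDK-tagged rules filtered out (iptables-save | grep -v SPDK_NVMF | iptables-restore). The filter stage can be exercised on its own against a stand-in ruleset, with no root or iptables required:

```shell
# Drop every rule carrying the SPDK_NVMF comment tag from an
# iptables-save-style ruleset on stdin. In the real helper this sits
# between iptables-save and iptables-restore; `|| true` keeps the
# pipeline's exit status clean when every rule happens to be SPDK-tagged.
strip_spdk_rules() {
    grep -v SPDK_NVMF || true
}
```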
00:32:28.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:28.770 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:28.770 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:32:28.770 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:28.770 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:28.770 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:28.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.771 --rc genhtml_branch_coverage=1 00:32:28.771 --rc genhtml_function_coverage=1 00:32:28.771 --rc genhtml_legend=1 00:32:28.771 --rc geninfo_all_blocks=1 00:32:28.771 --rc geninfo_unexecuted_blocks=1 00:32:28.771 00:32:28.771 ' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:28.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.771 --rc genhtml_branch_coverage=1 00:32:28.771 --rc 
genhtml_function_coverage=1 00:32:28.771 --rc genhtml_legend=1 00:32:28.771 --rc geninfo_all_blocks=1 00:32:28.771 --rc geninfo_unexecuted_blocks=1 00:32:28.771 00:32:28.771 ' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:28.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.771 --rc genhtml_branch_coverage=1 00:32:28.771 --rc genhtml_function_coverage=1 00:32:28.771 --rc genhtml_legend=1 00:32:28.771 --rc geninfo_all_blocks=1 00:32:28.771 --rc geninfo_unexecuted_blocks=1 00:32:28.771 00:32:28.771 ' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:28.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.771 --rc genhtml_branch_coverage=1 00:32:28.771 --rc genhtml_function_coverage=1 00:32:28.771 --rc genhtml_legend=1 00:32:28.771 --rc geninfo_all_blocks=1 00:32:28.771 --rc geninfo_unexecuted_blocks=1 00:32:28.771 00:32:28.771 ' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:28.771 
11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.771 
11:43:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:28.771 11:43:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:28.771 
11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:28.771 11:43:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:35.352 11:43:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:35.352 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:35.352 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.352 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:35.353 11:43:48 
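The `gather_supported_nvmf_pci_devs` logic traced here buckets NICs by PCI vendor:device ID (Intel E810/X722, Mellanox ConnectX variants) before scanning sysfs for their net devices; that is why the log prints `Found 0000:86:00.0 (0x8086 - 0x159b)`. A hedged sketch of the classification, with the ID tables copied from the nvmf/common.sh lines above (the helper name and return strings are illustrative, not SPDK's API):

```python
# PCI ID tables copied from the nvmf/common.sh trace above.
E810 = {"0x1592", "0x159b"}
X722 = {"0x37d2"}
MLX = {"0xa2dc", "0x1021", "0xa2d6", "0x101d", "0x101b",
       "0x1017", "0x1019", "0x1015", "0x1013"}

def classify_nic(vendor: str, device: str) -> str:
    """Map a PCI vendor/device pair to the NIC family the test script uses."""
    if vendor == "0x8086" and device in E810:
        return "e810"
    if vendor == "0x8086" and device in X722:
        return "x722"
    if vendor == "0x15b3" and device in MLX:
        return "mlx"
    return "unknown"
```

The two `0x8086 - 0x159b` devices found in the trace land in the e810 bucket, which is why the `[[ e810 == e810 ]]` branch selects them as the test's pci_devs.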
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:35.353 Found net devices under 0000:86:00.0: cvl_0_0 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:35.353 Found net devices under 0000:86:00.1: cvl_0_1 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:35.353 11:43:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:35.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:35.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:32:35.353 00:32:35.353 --- 10.0.0.2 ping statistics --- 00:32:35.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.353 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:35.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:35.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:32:35.353 00:32:35.353 --- 10.0.0.1 ping statistics --- 00:32:35.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.353 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:35.353 11:43:48 
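The connectivity check above pings once in each direction across the namespace boundary and prints an rtt summary. Extracting the numbers from that summary line can be sketched with a small helper (illustrative only, not part of the test scripts):

```python
import re

def parse_rtt(line: str) -> dict:
    """Extract min/avg/max/mdev (in ms) from ping's rtt summary line."""
    m = re.search(r"=\s*([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+)\s*ms", line)
    if m is None:
        raise ValueError("no rtt summary found")
    return dict(zip(("min", "avg", "max", "mdev"),
                    (float(v) for v in m.groups())))
```

Applied to the first ping in the trace (`rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms`), this yields an average of 0.46 ms from host to the namespaced target interface.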
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2500621 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2500621 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2500621 ']' 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:35.353 [2024-11-19 11:43:48.419097] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:35.353 [2024-11-19 11:43:48.420107] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:32:35.353 [2024-11-19 11:43:48.420147] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.353 [2024-11-19 11:43:48.498415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:35.353 [2024-11-19 11:43:48.540488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:35.353 [2024-11-19 11:43:48.540524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:35.353 [2024-11-19 11:43:48.540531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:35.353 [2024-11-19 11:43:48.540537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:35.353 [2024-11-19 11:43:48.540543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:35.353 [2024-11-19 11:43:48.541679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.353 [2024-11-19 11:43:48.541681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.353 [2024-11-19 11:43:48.609016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:35.353 [2024-11-19 11:43:48.609598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:35.353 [2024-11-19 11:43:48.609835] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:35.353 5000+0 records in 00:32:35.353 5000+0 records out 00:32:35.353 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0170751 s, 600 MB/s 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:35.353 AIO0 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.353 11:43:48 
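The `setup_bdev_aio` step above backs the AIO0 bdev with a file written by dd: 5000 blocks of 2048 bytes in about 0.017 s. The ~600 MB/s figure dd reports (decimal megabytes, as dd uses) can be sanity-checked arithmetically:

```python
def dd_rate_mb_s(blocks: int, bs: int, seconds: float) -> float:
    """Throughput in decimal MB/s, matching dd's own summary units."""
    return blocks * bs / seconds / 1e6

# Figures from the trace: 5000 blocks * 2048 bytes in 0.0170751 s,
# i.e. 10,240,000 bytes, which works out to just under 600 MB/s.
rate = dd_rate_mb_s(5000, 2048, 0.0170751)
```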
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:35.353 [2024-11-19 11:43:48.726494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:35.353 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:35.354 [2024-11-19 11:43:48.766854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2500621 0 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2500621 0 idle 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2500621 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2500621 -w 256 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2500621 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0' 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2500621 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:35.354 
11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2500621 1 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2500621 1 idle 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2500621 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2500621 -w 256 00:32:35.354 11:43:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2500625 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 
00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2500625 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2500674 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2500621 0 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2500621 0 busy 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2500621 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:35.623 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2500621 -w 256 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2500621 root 20 0 128.2g 47616 34560 S 6.7 0.0 0:00.26 reactor_0' 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2500621 root 20 0 128.2g 47616 34560 S 6.7 0.0 0:00.26 reactor_0 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:35.624 11:43:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:32:36.557 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:32:36.557 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 2500621 -w 256 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2500621 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.56 reactor_0' 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2500621 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.56 reactor_0 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2500621 1 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2500621 1 busy 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2500621 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2500621 -w 256 00:32:36.816 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:37.074 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2500625 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:01.33 reactor_1' 00:32:37.074 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2500625 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:01.33 reactor_1 00:32:37.074 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:37.074 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:37.074 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:32:37.074 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:32:37.074 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:37.074 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:37.074 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:37.074 11:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:37.074 11:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2500674 00:32:47.053 Initializing NVMe Controllers 00:32:47.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:47.053 
Controller IO queue size 256, less than required. 00:32:47.053 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:47.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:47.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:47.053 Initialization complete. Launching workers. 00:32:47.053 ======================================================== 00:32:47.053 Latency(us) 00:32:47.053 Device Information : IOPS MiB/s Average min max 00:32:47.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16463.11 64.31 15558.13 3659.66 31164.81 00:32:47.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16609.51 64.88 15417.13 7537.38 27318.34 00:32:47.053 ======================================================== 00:32:47.053 Total : 33072.62 129.19 15487.32 3659.66 31164.81 00:32:47.053 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2500621 0 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2500621 0 idle 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2500621 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:47.053 11:43:59 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2500621 -w 256 00:32:47.053 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2500621 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0' 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2500621 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2500621 1 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2500621 1 idle 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2500621 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2500621 -w 256 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2500625 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2500625 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:32:47.054 11:43:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:47.054 11:44:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:47.054 11:44:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:47.054 11:44:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:47.054 11:44:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:47.054 11:44:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2500621 0 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2500621 0 idle 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2500621 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:48.959 11:44:02 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2500621 -w 256 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2500621 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.51 reactor_0' 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2500621 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.51 reactor_0 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2500621 1 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2500621 1 idle 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2500621 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2500621 -w 256 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2500625 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.10 reactor_1' 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2500625 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.10 reactor_1 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:48.959 11:44:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:49.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:49.219 rmmod nvme_tcp 00:32:49.219 rmmod nvme_fabrics 00:32:49.219 rmmod nvme_keyring 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2500621 ']' 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2500621 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2500621 ']' 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2500621 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2500621 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2500621' 00:32:49.219 killing process with pid 2500621 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2500621 00:32:49.219 11:44:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2500621 00:32:49.478 11:44:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:49.478 11:44:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:32:49.478 11:44:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:49.478 11:44:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:49.478 11:44:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:49.478 11:44:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:49.478 11:44:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:49.478 11:44:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.478 11:44:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.478 11:44:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.478 11:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:49.478 11:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.016 11:44:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:52.016 00:32:52.016 real 0m22.891s 00:32:52.016 user 0m39.845s 00:32:52.016 sys 0m8.360s 00:32:52.016 11:44:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:52.016 11:44:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:52.016 ************************************ 00:32:52.016 END TEST nvmf_interrupt 00:32:52.016 ************************************ 00:32:52.016 00:32:52.016 real 27m24.609s 00:32:52.016 user 56m32.210s 00:32:52.016 sys 9m22.092s 00:32:52.016 11:44:05 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:52.016 11:44:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:52.016 ************************************ 00:32:52.016 END TEST nvmf_tcp 00:32:52.016 ************************************ 00:32:52.016 11:44:05 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:52.016 11:44:05 -- 
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:52.016 11:44:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:52.016 11:44:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:52.016 11:44:05 -- common/autotest_common.sh@10 -- # set +x 00:32:52.016 ************************************ 00:32:52.016 START TEST spdkcli_nvmf_tcp 00:32:52.016 ************************************ 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:52.016 * Looking for test storage... 00:32:52.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:52.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.016 --rc genhtml_branch_coverage=1 00:32:52.016 --rc genhtml_function_coverage=1 00:32:52.016 --rc genhtml_legend=1 00:32:52.016 --rc geninfo_all_blocks=1 
00:32:52.016 --rc geninfo_unexecuted_blocks=1 00:32:52.016 00:32:52.016 ' 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:52.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.016 --rc genhtml_branch_coverage=1 00:32:52.016 --rc genhtml_function_coverage=1 00:32:52.016 --rc genhtml_legend=1 00:32:52.016 --rc geninfo_all_blocks=1 00:32:52.016 --rc geninfo_unexecuted_blocks=1 00:32:52.016 00:32:52.016 ' 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:52.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.016 --rc genhtml_branch_coverage=1 00:32:52.016 --rc genhtml_function_coverage=1 00:32:52.016 --rc genhtml_legend=1 00:32:52.016 --rc geninfo_all_blocks=1 00:32:52.016 --rc geninfo_unexecuted_blocks=1 00:32:52.016 00:32:52.016 ' 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:52.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.016 --rc genhtml_branch_coverage=1 00:32:52.016 --rc genhtml_function_coverage=1 00:32:52.016 --rc genhtml_legend=1 00:32:52.016 --rc geninfo_all_blocks=1 00:32:52.016 --rc geninfo_unexecuted_blocks=1 00:32:52.016 00:32:52.016 ' 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:52.016 11:44:05 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:52.016 11:44:05 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:52.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2503543 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2503543 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2503543 ']' 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.017 11:44:05 
spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:52.017 [2024-11-19 11:44:05.581436] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:32:52.017 [2024-11-19 11:44:05.581485] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503543 ] 00:32:52.017 [2024-11-19 11:44:05.654639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:52.017 [2024-11-19 11:44:05.698340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.017 [2024-11-19 11:44:05.698344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:52.017 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:52.276 11:44:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:52.276 11:44:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- 
# [[ tcp == \r\d\m\a ]] 00:32:52.276 11:44:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:52.276 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.276 11:44:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:52.276 11:44:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:52.276 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:52.276 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:52.276 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:52.276 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:52.276 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:52.276 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:52.276 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:52.276 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:52.276 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:52.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:52.276 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:52.276 ' 00:32:54.807 [2024-11-19 11:44:08.517111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.182 [2024-11-19 11:44:09.853576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:32:58.712 [2024-11-19 11:44:12.329324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:01.243 [2024-11-19 11:44:14.516179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:02.620 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:02.620 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:02.620 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:02.620 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:02.620 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:02.620 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:02.620 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:02.620 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:02.620 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:02.620 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:02.620 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:02.620 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:02.620 11:44:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:33:02.620 11:44:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:02.620 11:44:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.620 11:44:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:02.620 11:44:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:02.620 11:44:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.620 11:44:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:02.620 11:44:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:03.188 11:44:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:03.188 11:44:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:03.188 11:44:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:03.188 11:44:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:03.188 11:44:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:03.188 11:44:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:03.188 11:44:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:03.188 11:44:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:03.188 11:44:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:03.188 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:33:03.188 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:03.188 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:03.188 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:03.188 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:03.188 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:03.188 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:03.188 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:03.188 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:03.188 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:03.188 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:03.188 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:03.188 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:03.188 ' 00:33:09.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:09.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:09.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:09.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:09.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:09.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:09.754 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:09.754 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:09.754 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:09.754 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:09.754 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:09.754 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:09.754 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:09.754 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2503543 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2503543 ']' 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2503543 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2503543 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2503543' 00:33:09.754 killing process with pid 2503543 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2503543 00:33:09.754 11:44:22 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2503543 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2503543 ']' 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2503543 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2503543 ']' 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2503543 00:33:09.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2503543) - No such process 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2503543 is not found' 00:33:09.754 Process with pid 2503543 is not found 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:09.754 00:33:09.754 real 0m17.340s 00:33:09.754 user 0m38.260s 00:33:09.754 sys 0m0.781s 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.754 11:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:09.754 ************************************ 00:33:09.754 END TEST spdkcli_nvmf_tcp 00:33:09.754 ************************************ 00:33:09.754 11:44:22 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:09.754 11:44:22 -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:33:09.754 11:44:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.754 11:44:22 -- common/autotest_common.sh@10 -- # set +x 00:33:09.754 ************************************ 00:33:09.754 START TEST nvmf_identify_passthru 00:33:09.754 ************************************ 00:33:09.754 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:09.754 * Looking for test storage... 00:33:09.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:09.754 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:09.754 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:33:09.754 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:09.754 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:09.754 11:44:22 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:09.755 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.755 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:09.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.755 --rc genhtml_branch_coverage=1 00:33:09.755 --rc genhtml_function_coverage=1 00:33:09.755 --rc genhtml_legend=1 
00:33:09.755 --rc geninfo_all_blocks=1 00:33:09.755 --rc geninfo_unexecuted_blocks=1 00:33:09.755 00:33:09.755 ' 00:33:09.755 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:09.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.755 --rc genhtml_branch_coverage=1 00:33:09.755 --rc genhtml_function_coverage=1 00:33:09.755 --rc genhtml_legend=1 00:33:09.755 --rc geninfo_all_blocks=1 00:33:09.755 --rc geninfo_unexecuted_blocks=1 00:33:09.755 00:33:09.755 ' 00:33:09.755 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:09.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.755 --rc genhtml_branch_coverage=1 00:33:09.755 --rc genhtml_function_coverage=1 00:33:09.755 --rc genhtml_legend=1 00:33:09.755 --rc geninfo_all_blocks=1 00:33:09.755 --rc geninfo_unexecuted_blocks=1 00:33:09.755 00:33:09.755 ' 00:33:09.755 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:09.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.755 --rc genhtml_branch_coverage=1 00:33:09.755 --rc genhtml_function_coverage=1 00:33:09.755 --rc genhtml_legend=1 00:33:09.755 --rc geninfo_all_blocks=1 00:33:09.755 --rc geninfo_unexecuted_blocks=1 00:33:09.755 00:33:09.755 ' 00:33:09.755 11:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.755 11:44:22 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.755 11:44:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.755 11:44:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.755 11:44:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.755 11:44:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:09.755 11:44:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:09.755 11:44:22 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:09.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.755 11:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.755 11:44:22 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.755 11:44:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.755 11:44:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.755 11:44:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.755 11:44:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:09.755 11:44:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.755 11:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.755 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:09.755 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:09.755 11:44:22 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.755 11:44:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:15.032 
11:44:28 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:15.032 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:15.032 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:15.032 Found net devices under 0000:86:00.0: cvl_0_0 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.032 11:44:28 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:15.032 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:15.033 Found net devices under 0000:86:00.1: cvl_0_1 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:15.033 
11:44:28 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:15.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:15.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:33:15.033 00:33:15.033 --- 10.0.0.2 ping statistics --- 00:33:15.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.033 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:15.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:15.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:33:15.033 00:33:15.033 --- 10.0.0.1 ping statistics --- 00:33:15.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.033 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:15.033 11:44:28 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:15.293 11:44:28 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.293 11:44:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:15.293 
11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:15.293 11:44:28 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:15.293 11:44:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:15.293 11:44:28 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:15.293 11:44:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:15.293 11:44:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:15.293 11:44:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:19.482 11:44:33 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:33:19.482 11:44:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:19.482 11:44:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:19.482 11:44:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:23.671 11:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:23.671 11:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:23.671 11:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:23.671 11:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2510632 00:33:23.671 11:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:23.671 11:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:23.671 11:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2510632 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2510632 ']' 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:23.671 [2024-11-19 11:44:37.283694] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:33:23.671 [2024-11-19 11:44:37.283741] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.671 [2024-11-19 11:44:37.362716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:23.671 [2024-11-19 11:44:37.405907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.671 [2024-11-19 11:44:37.405955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:23.671 [2024-11-19 11:44:37.405962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.671 [2024-11-19 11:44:37.405968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.671 [2024-11-19 11:44:37.405974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:23.671 [2024-11-19 11:44:37.407553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.671 [2024-11-19 11:44:37.407665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:23.671 [2024-11-19 11:44:37.407774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.671 [2024-11-19 11:44:37.407774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:23.671 11:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:23.671 INFO: Log level set to 20 00:33:23.671 INFO: Requests: 00:33:23.671 { 00:33:23.671 "jsonrpc": "2.0", 00:33:23.671 "method": "nvmf_set_config", 00:33:23.671 "id": 1, 00:33:23.671 "params": { 00:33:23.671 "admin_cmd_passthru": { 00:33:23.671 "identify_ctrlr": true 00:33:23.671 } 00:33:23.671 } 00:33:23.671 } 00:33:23.671 00:33:23.671 INFO: response: 00:33:23.671 { 00:33:23.671 "jsonrpc": "2.0", 00:33:23.671 "id": 1, 00:33:23.671 "result": true 00:33:23.671 } 00:33:23.671 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.671 11:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.671 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:23.671 INFO: Setting log level to 20 00:33:23.671 INFO: Setting log level to 20 00:33:23.671 INFO: Log level set to 20 00:33:23.671 INFO: Log level set to 20 00:33:23.671 
INFO: Requests: 00:33:23.671 { 00:33:23.671 "jsonrpc": "2.0", 00:33:23.671 "method": "framework_start_init", 00:33:23.671 "id": 1 00:33:23.671 } 00:33:23.671 00:33:23.671 INFO: Requests: 00:33:23.671 { 00:33:23.671 "jsonrpc": "2.0", 00:33:23.671 "method": "framework_start_init", 00:33:23.671 "id": 1 00:33:23.671 } 00:33:23.671 00:33:23.929 [2024-11-19 11:44:37.514638] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:23.929 INFO: response: 00:33:23.929 { 00:33:23.929 "jsonrpc": "2.0", 00:33:23.929 "id": 1, 00:33:23.929 "result": true 00:33:23.929 } 00:33:23.929 00:33:23.929 INFO: response: 00:33:23.929 { 00:33:23.929 "jsonrpc": "2.0", 00:33:23.929 "id": 1, 00:33:23.929 "result": true 00:33:23.929 } 00:33:23.929 00:33:23.929 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.929 11:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:23.929 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.929 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:23.929 INFO: Setting log level to 40 00:33:23.929 INFO: Setting log level to 40 00:33:23.929 INFO: Setting log level to 40 00:33:23.929 [2024-11-19 11:44:37.527994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.929 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.929 11:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:23.929 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:23.929 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:23.929 11:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:23.929 11:44:37 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.929 11:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.203 Nvme0n1 00:33:27.203 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.203 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:27.203 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.203 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.203 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.203 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:27.203 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.203 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.203 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.203 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:27.203 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.204 [2024-11-19 11:44:40.443620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.204 11:44:40 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.204 [ 00:33:27.204 { 00:33:27.204 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:27.204 "subtype": "Discovery", 00:33:27.204 "listen_addresses": [], 00:33:27.204 "allow_any_host": true, 00:33:27.204 "hosts": [] 00:33:27.204 }, 00:33:27.204 { 00:33:27.204 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:27.204 "subtype": "NVMe", 00:33:27.204 "listen_addresses": [ 00:33:27.204 { 00:33:27.204 "trtype": "TCP", 00:33:27.204 "adrfam": "IPv4", 00:33:27.204 "traddr": "10.0.0.2", 00:33:27.204 "trsvcid": "4420" 00:33:27.204 } 00:33:27.204 ], 00:33:27.204 "allow_any_host": true, 00:33:27.204 "hosts": [], 00:33:27.204 "serial_number": "SPDK00000000000001", 00:33:27.204 "model_number": "SPDK bdev Controller", 00:33:27.204 "max_namespaces": 1, 00:33:27.204 "min_cntlid": 1, 00:33:27.204 "max_cntlid": 65519, 00:33:27.204 "namespaces": [ 00:33:27.204 { 00:33:27.204 "nsid": 1, 00:33:27.204 "bdev_name": "Nvme0n1", 00:33:27.204 "name": "Nvme0n1", 00:33:27.204 "nguid": "0B8729452C40462981B14378B7D1762B", 00:33:27.204 "uuid": "0b872945-2c40-4629-81b1-4378b7d1762b" 00:33:27.204 } 00:33:27.204 ] 00:33:27.204 } 00:33:27.204 ] 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:27.204 11:44:40 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:27.204 11:44:40 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:27.204 11:44:40 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:27.204 11:44:40 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:27.204 11:44:40 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:27.204 11:44:40 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:27.204 11:44:40 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:27.204 rmmod nvme_tcp 00:33:27.204 rmmod nvme_fabrics 00:33:27.204 rmmod nvme_keyring 00:33:27.204 11:44:40 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:27.204 11:44:40 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:27.204 11:44:40 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:27.204 11:44:40 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2510632 ']' 00:33:27.204 11:44:40 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2510632 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2510632 ']' 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2510632 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2510632 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2510632' 00:33:27.204 killing process with pid 2510632 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2510632 00:33:27.204 11:44:40 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2510632 00:33:29.102 11:44:42 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:29.103 11:44:42 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:29.103 11:44:42 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:29.103 11:44:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:29.103 11:44:42 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:29.103 11:44:42 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:29.103 11:44:42 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:29.103 11:44:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:29.103 11:44:42 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:29.103 11:44:42 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.103 11:44:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:29.103 11:44:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.010 11:44:44 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:31.010 00:33:31.010 real 0m21.716s 00:33:31.010 user 0m26.503s 00:33:31.010 sys 0m6.141s 00:33:31.010 11:44:44 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.010 11:44:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.010 ************************************ 00:33:31.010 END TEST nvmf_identify_passthru 00:33:31.010 ************************************ 00:33:31.010 11:44:44 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:31.010 11:44:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:31.010 11:44:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:31.010 11:44:44 -- common/autotest_common.sh@10 -- # set +x 00:33:31.010 ************************************ 00:33:31.010 START TEST nvmf_dif 00:33:31.010 ************************************ 00:33:31.010 11:44:44 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:31.010 * Looking for test storage... 
00:33:31.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:31.010 11:44:44 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:31.010 11:44:44 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:31.010 11:44:44 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:31.010 11:44:44 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:31.010 11:44:44 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:31.010 11:44:44 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:31.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.010 --rc genhtml_branch_coverage=1 00:33:31.010 --rc genhtml_function_coverage=1 00:33:31.010 --rc genhtml_legend=1 00:33:31.010 --rc geninfo_all_blocks=1 00:33:31.010 --rc geninfo_unexecuted_blocks=1 00:33:31.010 00:33:31.010 ' 00:33:31.010 11:44:44 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:31.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.010 --rc genhtml_branch_coverage=1 00:33:31.010 --rc genhtml_function_coverage=1 00:33:31.010 --rc genhtml_legend=1 00:33:31.010 --rc geninfo_all_blocks=1 00:33:31.010 --rc geninfo_unexecuted_blocks=1 00:33:31.010 00:33:31.010 ' 00:33:31.010 11:44:44 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:33:31.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.010 --rc genhtml_branch_coverage=1 00:33:31.010 --rc genhtml_function_coverage=1 00:33:31.010 --rc genhtml_legend=1 00:33:31.010 --rc geninfo_all_blocks=1 00:33:31.010 --rc geninfo_unexecuted_blocks=1 00:33:31.010 00:33:31.010 ' 00:33:31.010 11:44:44 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:31.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.010 --rc genhtml_branch_coverage=1 00:33:31.010 --rc genhtml_function_coverage=1 00:33:31.010 --rc genhtml_legend=1 00:33:31.010 --rc geninfo_all_blocks=1 00:33:31.010 --rc geninfo_unexecuted_blocks=1 00:33:31.010 00:33:31.010 ' 00:33:31.010 11:44:44 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:31.010 11:44:44 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.010 11:44:44 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.010 11:44:44 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.010 11:44:44 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.010 11:44:44 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.010 11:44:44 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.010 11:44:44 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:31.011 11:44:44 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:31.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:31.011 11:44:44 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:31.011 11:44:44 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:31.011 11:44:44 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:31.011 11:44:44 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:31.011 11:44:44 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.011 11:44:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:31.011 11:44:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:31.011 11:44:44 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:31.011 11:44:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:37.583 11:44:50 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:37.583 11:44:50 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:37.584 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:37.584 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:37.584 11:44:50 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:37.584 Found net devices under 0000:86:00.0: cvl_0_0 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:37.584 Found net devices under 0000:86:00.1: cvl_0_1 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:37.584 
11:44:50 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:37.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:37.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:33:37.584 00:33:37.584 --- 10.0.0.2 ping statistics --- 00:33:37.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.584 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:37.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:37.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:33:37.584 00:33:37.584 --- 10.0.0.1 ping statistics --- 00:33:37.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.584 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:37.584 11:44:50 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:39.619 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:39.619 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:39.619 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:39.619 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:39.878 11:44:53 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.878 11:44:53 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:39.878 11:44:53 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:39.878 11:44:53 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.878 11:44:53 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:39.878 11:44:53 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:39.878 11:44:53 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:39.878 11:44:53 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:39.878 11:44:53 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:39.878 11:44:53 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:39.878 11:44:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:39.878 11:44:53 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2516130 00:33:39.878 11:44:53 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2516130 00:33:39.878 11:44:53 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:39.878 11:44:53 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2516130 ']' 00:33:39.878 11:44:53 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.878 11:44:53 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:39.878 11:44:53 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:39.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.878 11:44:53 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:39.878 11:44:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:39.878 [2024-11-19 11:44:53.493941] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:33:39.878 [2024-11-19 11:44:53.493993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.878 [2024-11-19 11:44:53.573773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.878 [2024-11-19 11:44:53.614562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:39.878 [2024-11-19 11:44:53.614599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.878 [2024-11-19 11:44:53.614606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.878 [2024-11-19 11:44:53.614612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:39.878 [2024-11-19 11:44:53.614617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:39.878 [2024-11-19 11:44:53.615187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.137 11:44:53 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.137 11:44:53 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:40.137 11:44:53 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:40.137 11:44:53 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:40.137 11:44:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:40.137 11:44:53 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.137 11:44:53 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:40.137 11:44:53 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:40.137 11:44:53 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.137 11:44:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:40.137 [2024-11-19 11:44:53.750688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.137 11:44:53 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.137 11:44:53 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:40.137 11:44:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:40.138 11:44:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.138 11:44:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:40.138 ************************************ 00:33:40.138 START TEST fio_dif_1_default 00:33:40.138 ************************************ 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:40.138 bdev_null0 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:40.138 [2024-11-19 11:44:53.823010] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.138 { 00:33:40.138 "params": { 00:33:40.138 "name": "Nvme$subsystem", 00:33:40.138 "trtype": "$TEST_TRANSPORT", 00:33:40.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.138 "adrfam": "ipv4", 00:33:40.138 "trsvcid": "$NVMF_PORT", 00:33:40.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.138 "hdgst": ${hdgst:-false}, 00:33:40.138 "ddgst": ${ddgst:-false} 00:33:40.138 }, 00:33:40.138 "method": "bdev_nvme_attach_controller" 00:33:40.138 } 00:33:40.138 EOF 00:33:40.138 )") 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:40.138 11:44:53 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:40.138 "params": { 00:33:40.138 "name": "Nvme0", 00:33:40.138 "trtype": "tcp", 00:33:40.138 "traddr": "10.0.0.2", 00:33:40.138 "adrfam": "ipv4", 00:33:40.138 "trsvcid": "4420", 00:33:40.138 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:40.138 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:40.138 "hdgst": false, 00:33:40.138 "ddgst": false 00:33:40.138 }, 00:33:40.138 "method": "bdev_nvme_attach_controller" 00:33:40.138 }' 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:40.138 11:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.710 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:40.710 fio-3.35 
00:33:40.710 Starting 1 thread 00:33:52.913 00:33:52.913 filename0: (groupid=0, jobs=1): err= 0: pid=2516463: Tue Nov 19 11:45:04 2024 00:33:52.913 read: IOPS=146, BW=586KiB/s (600kB/s)(5872KiB/10025msec) 00:33:52.913 slat (nsec): min=5855, max=26369, avg=6227.36, stdev=1110.02 00:33:52.913 clat (usec): min=380, max=43759, avg=27298.39, stdev=19312.80 00:33:52.913 lat (usec): min=386, max=43785, avg=27304.61, stdev=19312.80 00:33:52.913 clat percentiles (usec): 00:33:52.913 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 420], 00:33:52.913 | 30.00th=[ 553], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:33:52.913 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:52.913 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:33:52.913 | 99.99th=[43779] 00:33:52.913 bw ( KiB/s): min= 384, max= 896, per=99.87%, avg=585.60, stdev=198.38, samples=20 00:33:52.913 iops : min= 96, max= 224, avg=146.40, stdev=49.59, samples=20 00:33:52.913 lat (usec) : 500=29.09%, 750=4.97% 00:33:52.913 lat (msec) : 50=65.94% 00:33:52.913 cpu : usr=92.27%, sys=7.42%, ctx=6, majf=0, minf=0 00:33:52.913 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.913 issued rwts: total=1468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.913 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:52.913 00:33:52.913 Run status group 0 (all jobs): 00:33:52.913 READ: bw=586KiB/s (600kB/s), 586KiB/s-586KiB/s (600kB/s-600kB/s), io=5872KiB (6013kB), run=10025-10025msec 00:33:52.913 11:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:52.913 11:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:52.913 11:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
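The `READ:` summary line in the run status above can be scraped for the aggregate bandwidth with a one-line extraction; this sketch is not part of the harness — the input line is copied verbatim from this run's output:

```shell
#!/usr/bin/env bash
# Sketch: pull the aggregate bandwidth out of a fio run-status READ line.
line='READ: bw=586KiB/s (600kB/s), 586KiB/s-586KiB/s (600kB/s-600kB/s), io=5872KiB (6013kB), run=10025-10025msec'
# Capture the non-space token after the first "bw=".
bw=$(sed -n 's/.*bw=\([^ ]*\).*/\1/p' <<<"$line")
echo "$bw"   # prints 586KiB/s
```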
00:33:52.913 11:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:52.913 11:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:52.913 11:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.914 00:33:52.914 real 0m11.089s 00:33:52.914 user 0m15.996s 00:33:52.914 sys 0m1.031s 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:52.914 ************************************ 00:33:52.914 END TEST fio_dif_1_default 00:33:52.914 ************************************ 00:33:52.914 11:45:04 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:52.914 11:45:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:52.914 11:45:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:52.914 11:45:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:52.914 ************************************ 00:33:52.914 START TEST fio_dif_1_multi_subsystems 00:33:52.914 ************************************ 00:33:52.914 11:45:04 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:52.914 bdev_null0 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.914 11:45:04 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:52.914 [2024-11-19 11:45:04.990338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.914 11:45:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:52.914 bdev_null1 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 
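Each subsystem above is brought up by the same four-RPC sequence from `target/dif.sh`'s `create_subsystem`: a DIF-capable null bdev, an NVMe-oF subsystem, a namespace mapping, and a TCP listener. A dry-run sketch — `rpc` here merely echoes what the harness's `rpc_cmd` would send to a live SPDK target, so nothing real is configured:

```shell
#!/usr/bin/env bash
# Dry-run sketch of create_subsystem from target/dif.sh: echo each RPC
# instead of invoking scripts/rpc.py against a running SPDK target.
rpc() { echo "rpc.py $*"; }

create_subsystem() {
  local sub_id=$1
  rpc bdev_null_create "bdev_null${sub_id}" 64 512 --md-size 16 --dif-type 1
  rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}" \
      --serial-number "53313233-${sub_id}" --allow-any-host
  rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub_id}" "bdev_null${sub_id}"
  rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub_id}" \
      -t tcp -a 10.0.0.2 -s 4420
}

create_subsystem 0
create_subsystem 1
```

Teardown (`destroy_subsystem`) reverses this with `nvmf_delete_subsystem` and `bdev_null_delete`, as seen at the end of each test in the log.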
00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.914 { 00:33:52.914 "params": { 00:33:52.914 "name": "Nvme$subsystem", 00:33:52.914 "trtype": "$TEST_TRANSPORT", 00:33:52.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.914 "adrfam": "ipv4", 00:33:52.914 "trsvcid": "$NVMF_PORT", 00:33:52.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.914 "hdgst": ${hdgst:-false}, 00:33:52.914 "ddgst": ${ddgst:-false} 00:33:52.914 }, 00:33:52.914 "method": "bdev_nvme_attach_controller" 00:33:52.914 } 00:33:52.914 EOF 00:33:52.914 )") 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.914 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.914 { 00:33:52.914 "params": { 00:33:52.915 "name": "Nvme$subsystem", 00:33:52.915 "trtype": "$TEST_TRANSPORT", 00:33:52.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.915 "adrfam": "ipv4", 00:33:52.915 "trsvcid": "$NVMF_PORT", 00:33:52.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.915 "hdgst": ${hdgst:-false}, 00:33:52.915 "ddgst": ${ddgst:-false} 00:33:52.915 }, 00:33:52.915 "method": "bdev_nvme_attach_controller" 00:33:52.915 } 00:33:52.915 EOF 00:33:52.915 )") 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:52.915 "params": { 00:33:52.915 "name": "Nvme0", 00:33:52.915 "trtype": "tcp", 00:33:52.915 "traddr": "10.0.0.2", 00:33:52.915 "adrfam": "ipv4", 00:33:52.915 "trsvcid": "4420", 00:33:52.915 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:52.915 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:52.915 "hdgst": false, 00:33:52.915 "ddgst": false 00:33:52.915 }, 00:33:52.915 "method": "bdev_nvme_attach_controller" 00:33:52.915 },{ 00:33:52.915 "params": { 00:33:52.915 "name": "Nvme1", 00:33:52.915 "trtype": "tcp", 00:33:52.915 "traddr": "10.0.0.2", 00:33:52.915 "adrfam": "ipv4", 00:33:52.915 "trsvcid": "4420", 00:33:52.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:52.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:52.915 "hdgst": false, 00:33:52.915 "ddgst": false 00:33:52.915 }, 00:33:52.915 "method": "bdev_nvme_attach_controller" 00:33:52.915 }' 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:52.915 11:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.915 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:52.915 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:52.915 fio-3.35 00:33:52.915 Starting 2 threads 00:34:02.896 00:34:02.896 filename0: (groupid=0, jobs=1): err= 0: pid=2518720: Tue Nov 19 11:45:16 2024 00:34:02.896 read: IOPS=225, BW=902KiB/s (924kB/s)(9024KiB/10005msec) 00:34:02.896 slat (nsec): min=6088, max=49060, avg=9509.28, stdev=6841.00 00:34:02.896 clat (usec): min=380, max=42604, avg=17710.14, stdev=20261.86 00:34:02.896 lat (usec): min=387, max=42612, avg=17719.65, stdev=20260.38 00:34:02.896 clat percentiles (usec): 00:34:02.896 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 429], 00:34:02.896 | 30.00th=[ 437], 40.00th=[ 449], 50.00th=[ 603], 60.00th=[40633], 00:34:02.896 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:34:02.896 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:02.896 | 99.99th=[42730] 00:34:02.896 bw ( KiB/s): min= 672, max= 1216, per=50.33%, avg=907.79, stdev=130.73, samples=19 00:34:02.896 iops : min= 168, max= 304, avg=226.95, stdev=32.68, samples=19 00:34:02.896 lat (usec) : 500=47.61%, 750=10.37% 00:34:02.896 lat (msec) : 50=42.02% 00:34:02.896 cpu : usr=98.46%, sys=1.27%, ctx=20, majf=0, minf=124 00:34:02.896 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:02.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.896 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.896 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:02.896 filename1: (groupid=0, jobs=1): err= 0: pid=2518721: Tue Nov 19 11:45:16 2024 00:34:02.896 read: IOPS=225, BW=901KiB/s (923kB/s)(9024KiB/10015msec) 00:34:02.896 slat (nsec): min=5864, max=77816, avg=8673.46, stdev=5331.18 00:34:02.896 clat (usec): min=376, max=42687, avg=17731.14, stdev=20222.73 00:34:02.896 lat (usec): min=383, max=42694, avg=17739.81, stdev=20221.55 00:34:02.896 clat percentiles (usec): 00:34:02.896 | 1.00th=[ 396], 5.00th=[ 412], 10.00th=[ 429], 20.00th=[ 465], 00:34:02.896 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 627], 60.00th=[40633], 00:34:02.896 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:34:02.896 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:02.896 | 99.99th=[42730] 00:34:02.896 bw ( KiB/s): min= 672, max= 1088, per=49.94%, avg=900.80, stdev=108.53, samples=20 00:34:02.896 iops : min= 168, max= 272, avg=225.20, stdev=27.13, samples=20 00:34:02.896 lat (usec) : 500=23.45%, 750=34.40%, 1000=0.13% 00:34:02.896 lat (msec) : 50=42.02% 00:34:02.896 cpu : usr=98.16%, sys=1.53%, ctx=32, majf=0, minf=92 00:34:02.896 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.896 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.896 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:02.896 00:34:02.896 Run status group 0 (all jobs): 00:34:02.896 READ: bw=1802KiB/s (1845kB/s), 901KiB/s-902KiB/s (923kB/s-924kB/s), io=17.6MiB (18.5MB), run=10005-10015msec 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.896 00:34:02.896 real 0m11.540s 00:34:02.896 user 0m26.837s 00:34:02.896 sys 0m0.681s 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:02.896 11:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.896 ************************************ 00:34:02.896 END TEST fio_dif_1_multi_subsystems 00:34:02.896 ************************************ 00:34:02.896 11:45:16 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:02.896 11:45:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:02.896 11:45:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:02.896 11:45:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:02.896 ************************************ 00:34:02.896 START TEST fio_dif_rand_params 00:34:02.896 ************************************ 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
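The `ldd | grep | awk` steps repeated before every fio run above come from `autotest_common.sh`'s `fio_bdev` wrapper: it probes the spdk_bdev fio plugin for a linked ASan runtime (`libasan`, then `libclang_rt.asan`) so that, if found, the runtime can be placed first in `LD_PRELOAD` ahead of the plugin. In this log both probes come back empty, so only the plugin itself is preloaded. A stand-alone sketch of that probe — the function name and demo paths are illustrative:

```shell
#!/usr/bin/env bash
# Sketch: find the ASan runtime a fio plugin links against, if any.
# Mirrors the sanitizers=('libasan' 'libclang_rt.asan') loop in the log.
detect_asan_lib() {
  local plugin=$1 sanitizer asan_lib=
  for sanitizer in libasan libclang_rt.asan; do
    # third ldd column is the resolved library path
    asan_lib=$(ldd "$plugin" 2>/dev/null | grep "$sanitizer" | awk '{print $3}')
    [ -n "$asan_lib" ] && break
  done
  echo "$asan_lib"
}

asan_lib=$(detect_asan_lib "$(command -v env)")   # any ELF binary works for a demo
echo "LD_PRELOAD would be: '$asan_lib /path/to/spdk/build/fio/spdk_bdev'"
```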
00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.896 bdev_null0 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:02.896 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.897 [2024-11-19 11:45:16.605536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:34:02.897 { 00:34:02.897 "params": { 00:34:02.897 "name": "Nvme$subsystem", 00:34:02.897 "trtype": "$TEST_TRANSPORT", 00:34:02.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.897 "adrfam": "ipv4", 00:34:02.897 "trsvcid": "$NVMF_PORT", 00:34:02.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.897 "hdgst": ${hdgst:-false}, 00:34:02.897 "ddgst": ${ddgst:-false} 00:34:02.897 }, 00:34:02.897 "method": "bdev_nvme_attach_controller" 00:34:02.897 } 00:34:02.897 EOF 00:34:02.897 )") 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # grep libasan 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:02.897 "params": { 00:34:02.897 "name": "Nvme0", 00:34:02.897 "trtype": "tcp", 00:34:02.897 "traddr": "10.0.0.2", 00:34:02.897 "adrfam": "ipv4", 00:34:02.897 "trsvcid": "4420", 00:34:02.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:02.897 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:02.897 "hdgst": false, 00:34:02.897 "ddgst": false 00:34:02.897 }, 00:34:02.897 "method": "bdev_nvme_attach_controller" 00:34:02.897 }' 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:02.897 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:03.175 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:03.175 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:03.175 11:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:03.175 11:45:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:03.439 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:03.439 ... 00:34:03.439 fio-3.35 00:34:03.439 Starting 3 threads 00:34:10.007 00:34:10.007 filename0: (groupid=0, jobs=1): err= 0: pid=2520906: Tue Nov 19 11:45:22 2024 00:34:10.007 read: IOPS=331, BW=41.4MiB/s (43.5MB/s)(209MiB/5046msec) 00:34:10.007 slat (nsec): min=6109, max=41593, avg=12729.50, stdev=4961.71 00:34:10.007 clat (usec): min=3358, max=50974, avg=9007.89, stdev=5309.51 00:34:10.007 lat (usec): min=3364, max=50989, avg=9020.62, stdev=5309.34 00:34:10.007 clat percentiles (usec): 00:34:10.007 | 1.00th=[ 3851], 5.00th=[ 6128], 10.00th=[ 7046], 20.00th=[ 7635], 00:34:10.007 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8717], 00:34:10.007 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[10290], 00:34:10.007 | 99.00th=[47973], 99.50th=[48497], 99.90th=[51119], 99.95th=[51119], 00:34:10.007 | 99.99th=[51119] 00:34:10.007 bw ( KiB/s): min=27648, max=48128, per=36.20%, avg=42777.60, stdev=5762.94, samples=10 00:34:10.007 iops : min= 216, max= 376, avg=334.20, stdev=45.02, samples=10 00:34:10.007 lat (msec) : 4=1.85%, 10=90.68%, 20=5.74%, 50=1.61%, 100=0.12% 00:34:10.007 cpu : usr=95.02%, sys=4.68%, ctx=13, majf=0, minf=45 00:34:10.007 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.008 issued rwts: total=1673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.008 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.008 filename0: (groupid=0, jobs=1): err= 0: pid=2520907: Tue Nov 19 11:45:22 2024 00:34:10.008 read: IOPS=293, BW=36.7MiB/s (38.4MB/s)(185MiB/5043msec) 
00:34:10.008 slat (nsec): min=6331, max=69980, avg=14727.80, stdev=6001.00 00:34:10.008 clat (usec): min=3271, max=50304, avg=10183.15, stdev=4610.65 00:34:10.008 lat (usec): min=3277, max=50331, avg=10197.88, stdev=4610.97 00:34:10.008 clat percentiles (usec): 00:34:10.008 | 1.00th=[ 3490], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 8455], 00:34:10.008 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10421], 00:34:10.008 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11731], 95.00th=[12125], 00:34:10.008 | 99.00th=[44827], 99.50th=[46400], 99.90th=[50070], 99.95th=[50070], 00:34:10.008 | 99.99th=[50070] 00:34:10.008 bw ( KiB/s): min=33536, max=43264, per=32.00%, avg=37811.20, stdev=2534.41, samples=10 00:34:10.008 iops : min= 262, max= 338, avg=295.40, stdev=19.80, samples=10 00:34:10.008 lat (msec) : 4=1.89%, 10=45.98%, 20=50.78%, 50=1.15%, 100=0.20% 00:34:10.008 cpu : usr=95.42%, sys=4.26%, ctx=12, majf=0, minf=91 00:34:10.008 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.008 issued rwts: total=1479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.008 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.008 filename0: (groupid=0, jobs=1): err= 0: pid=2520908: Tue Nov 19 11:45:22 2024 00:34:10.008 read: IOPS=301, BW=37.6MiB/s (39.5MB/s)(188MiB/5002msec) 00:34:10.008 slat (nsec): min=6188, max=36306, avg=12723.04, stdev=4586.70 00:34:10.008 clat (usec): min=3433, max=52245, avg=9947.97, stdev=5559.77 00:34:10.008 lat (usec): min=3439, max=52257, avg=9960.69, stdev=5559.65 00:34:10.008 clat percentiles (usec): 00:34:10.008 | 1.00th=[ 3490], 5.00th=[ 6194], 10.00th=[ 7504], 20.00th=[ 8291], 00:34:10.008 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:34:10.008 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11076], 
95.00th=[11600], 00:34:10.008 | 99.00th=[46400], 99.50th=[49546], 99.90th=[51643], 99.95th=[52167], 00:34:10.008 | 99.99th=[52167] 00:34:10.008 bw ( KiB/s): min=30208, max=42496, per=32.45%, avg=38343.11, stdev=4262.18, samples=9 00:34:10.008 iops : min= 236, max= 332, avg=299.56, stdev=33.30, samples=9 00:34:10.008 lat (msec) : 4=2.59%, 10=64.94%, 20=30.48%, 50=1.66%, 100=0.33% 00:34:10.008 cpu : usr=95.40%, sys=4.30%, ctx=10, majf=0, minf=50 00:34:10.008 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.008 issued rwts: total=1506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.008 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.008 00:34:10.008 Run status group 0 (all jobs): 00:34:10.008 READ: bw=115MiB/s (121MB/s), 36.7MiB/s-41.4MiB/s (38.4MB/s-43.5MB/s), io=582MiB (611MB), run=5002-5046msec 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 bdev_null0 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:10.008 11:45:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 [2024-11-19 11:45:22.933367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 bdev_null1 00:34:10.008 11:45:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 bdev_null2 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:10.008 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.008 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.008 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.008 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:10.008 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 
00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:10.009 { 00:34:10.009 "params": { 00:34:10.009 "name": "Nvme$subsystem", 00:34:10.009 "trtype": "$TEST_TRANSPORT", 00:34:10.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.009 "adrfam": "ipv4", 00:34:10.009 "trsvcid": "$NVMF_PORT", 00:34:10.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.009 "hdgst": ${hdgst:-false}, 00:34:10.009 "ddgst": ${ddgst:-false} 00:34:10.009 }, 00:34:10.009 "method": "bdev_nvme_attach_controller" 00:34:10.009 } 00:34:10.009 EOF 00:34:10.009 )") 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:10.009 { 00:34:10.009 "params": { 00:34:10.009 "name": "Nvme$subsystem", 00:34:10.009 "trtype": "$TEST_TRANSPORT", 00:34:10.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.009 "adrfam": "ipv4", 00:34:10.009 "trsvcid": "$NVMF_PORT", 00:34:10.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.009 "hdgst": ${hdgst:-false}, 00:34:10.009 "ddgst": ${ddgst:-false} 00:34:10.009 }, 00:34:10.009 "method": "bdev_nvme_attach_controller" 00:34:10.009 } 00:34:10.009 EOF 00:34:10.009 )") 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:10.009 
11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:10.009 { 00:34:10.009 "params": { 00:34:10.009 "name": "Nvme$subsystem", 00:34:10.009 "trtype": "$TEST_TRANSPORT", 00:34:10.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.009 "adrfam": "ipv4", 00:34:10.009 "trsvcid": "$NVMF_PORT", 00:34:10.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.009 "hdgst": ${hdgst:-false}, 00:34:10.009 "ddgst": ${ddgst:-false} 00:34:10.009 }, 00:34:10.009 "method": "bdev_nvme_attach_controller" 00:34:10.009 } 00:34:10.009 EOF 00:34:10.009 )") 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:10.009 "params": { 00:34:10.009 "name": "Nvme0", 00:34:10.009 "trtype": "tcp", 00:34:10.009 "traddr": "10.0.0.2", 00:34:10.009 "adrfam": "ipv4", 00:34:10.009 "trsvcid": "4420", 00:34:10.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:10.009 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:10.009 "hdgst": false, 00:34:10.009 "ddgst": false 00:34:10.009 }, 00:34:10.009 "method": "bdev_nvme_attach_controller" 00:34:10.009 },{ 00:34:10.009 "params": { 00:34:10.009 "name": "Nvme1", 00:34:10.009 "trtype": "tcp", 00:34:10.009 "traddr": "10.0.0.2", 00:34:10.009 "adrfam": "ipv4", 00:34:10.009 "trsvcid": "4420", 00:34:10.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:10.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:10.009 "hdgst": false, 00:34:10.009 "ddgst": false 00:34:10.009 }, 00:34:10.009 "method": "bdev_nvme_attach_controller" 00:34:10.009 },{ 00:34:10.009 "params": { 00:34:10.009 "name": "Nvme2", 00:34:10.009 "trtype": "tcp", 00:34:10.009 "traddr": "10.0.0.2", 00:34:10.009 "adrfam": "ipv4", 00:34:10.009 "trsvcid": "4420", 00:34:10.009 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:10.009 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:10.009 "hdgst": false, 00:34:10.009 "ddgst": false 00:34:10.009 }, 00:34:10.009 "method": "bdev_nvme_attach_controller" 00:34:10.009 }' 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.009 11:45:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:10.009 11:45:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.009 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:10.009 ... 00:34:10.009 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:10.009 ... 00:34:10.009 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:10.009 ... 
00:34:10.009 fio-3.35 00:34:10.009 Starting 24 threads 00:34:22.204 00:34:22.204 filename0: (groupid=0, jobs=1): err= 0: pid=2522044: Tue Nov 19 11:45:34 2024 00:34:22.204 read: IOPS=579, BW=2318KiB/s (2374kB/s)(22.7MiB/10006msec) 00:34:22.204 slat (nsec): min=6777, max=84616, avg=21521.77, stdev=16887.46 00:34:22.204 clat (usec): min=1631, max=30275, avg=27437.01, stdev=3591.44 00:34:22.204 lat (usec): min=1640, max=30303, avg=27458.53, stdev=3591.88 00:34:22.204 clat percentiles (usec): 00:34:22.204 | 1.00th=[ 2671], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:34:22.204 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.204 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.204 | 99.00th=[29230], 99.50th=[29754], 99.90th=[30016], 99.95th=[30278], 00:34:22.204 | 99.99th=[30278] 00:34:22.204 bw ( KiB/s): min= 2176, max= 3128, per=4.24%, avg=2313.20, stdev=199.90, samples=20 00:34:22.204 iops : min= 544, max= 782, avg=578.30, stdev=49.97, samples=20 00:34:22.204 lat (msec) : 2=0.24%, 4=1.26%, 10=0.28%, 20=1.10%, 50=97.12% 00:34:22.204 cpu : usr=98.47%, sys=1.07%, ctx=53, majf=0, minf=59 00:34:22.204 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:22.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.204 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.204 issued rwts: total=5799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.204 filename0: (groupid=0, jobs=1): err= 0: pid=2522045: Tue Nov 19 11:45:34 2024 00:34:22.204 read: IOPS=564, BW=2256KiB/s (2310kB/s)(22.1MiB/10044msec) 00:34:22.204 slat (nsec): min=7010, max=85938, avg=28415.09, stdev=17314.69 00:34:22.204 clat (usec): min=19560, max=52007, avg=27973.17, stdev=1427.99 00:34:22.204 lat (usec): min=19574, max=52030, avg=28001.59, stdev=1427.06 00:34:22.204 clat percentiles (usec): 
00:34:22.204 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:34:22.204 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.204 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.205 | 99.00th=[29492], 99.50th=[29754], 99.90th=[52167], 99.95th=[52167], 00:34:22.205 | 99.99th=[52167] 00:34:22.205 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2263.58, stdev=74.55, samples=19 00:34:22.205 iops : min= 512, max= 576, avg=565.89, stdev=18.64, samples=19 00:34:22.205 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:34:22.205 cpu : usr=98.42%, sys=1.16%, ctx=31, majf=0, minf=27 00:34:22.205 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 issued rwts: total=5665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.205 filename0: (groupid=0, jobs=1): err= 0: pid=2522046: Tue Nov 19 11:45:34 2024 00:34:22.205 read: IOPS=567, BW=2270KiB/s (2325kB/s)(22.2MiB/10008msec) 00:34:22.205 slat (nsec): min=4192, max=74528, avg=21189.53, stdev=6949.03 00:34:22.205 clat (usec): min=16402, max=52048, avg=27995.69, stdev=1117.28 00:34:22.205 lat (usec): min=16420, max=52060, avg=28016.88, stdev=1116.55 00:34:22.205 clat percentiles (usec): 00:34:22.205 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.205 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.205 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.205 | 99.00th=[29492], 99.50th=[31851], 99.90th=[38536], 99.95th=[38536], 00:34:22.205 | 99.99th=[52167] 00:34:22.205 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2263.58, stdev=74.55, samples=19 00:34:22.205 iops : min= 512, max= 576, avg=565.89, 
stdev=18.64, samples=19 00:34:22.205 lat (msec) : 20=0.28%, 50=99.68%, 100=0.04% 00:34:22.205 cpu : usr=98.38%, sys=1.26%, ctx=13, majf=0, minf=28 00:34:22.205 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:22.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.205 filename0: (groupid=0, jobs=1): err= 0: pid=2522048: Tue Nov 19 11:45:34 2024 00:34:22.205 read: IOPS=566, BW=2265KiB/s (2320kB/s)(22.1MiB/10001msec) 00:34:22.205 slat (usec): min=6, max=100, avg=38.52, stdev=17.80 00:34:22.205 clat (usec): min=16440, max=61650, avg=27903.23, stdev=1957.25 00:34:22.205 lat (usec): min=16447, max=61660, avg=27941.75, stdev=1956.24 00:34:22.205 clat percentiles (usec): 00:34:22.205 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:34:22.205 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:34:22.205 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:34:22.205 | 99.00th=[29230], 99.50th=[31851], 99.90th=[61604], 99.95th=[61604], 00:34:22.205 | 99.99th=[61604] 00:34:22.205 bw ( KiB/s): min= 2052, max= 2304, per=4.13%, avg=2257.05, stdev=75.85, samples=19 00:34:22.205 iops : min= 513, max= 576, avg=564.26, stdev=18.96, samples=19 00:34:22.205 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:34:22.205 cpu : usr=98.52%, sys=1.10%, ctx=14, majf=0, minf=30 00:34:22.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.205 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:34:22.205 filename0: (groupid=0, jobs=1): err= 0: pid=2522049: Tue Nov 19 11:45:34 2024 00:34:22.205 read: IOPS=568, BW=2275KiB/s (2329kB/s)(22.2MiB/10017msec) 00:34:22.205 slat (nsec): min=9681, max=67001, avg=22138.01, stdev=6599.90 00:34:22.205 clat (usec): min=15050, max=30223, avg=27940.46, stdev=907.07 00:34:22.205 lat (usec): min=15067, max=30241, avg=27962.60, stdev=907.28 00:34:22.205 clat percentiles (usec): 00:34:22.205 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.205 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.205 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.205 | 99.00th=[29492], 99.50th=[29754], 99.90th=[30016], 99.95th=[30278], 00:34:22.205 | 99.99th=[30278] 00:34:22.205 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2270.32, stdev=57.91, samples=19 00:34:22.205 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:34:22.205 lat (msec) : 20=0.28%, 50=99.72% 00:34:22.205 cpu : usr=98.67%, sys=0.97%, ctx=17, majf=0, minf=42 00:34:22.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.205 filename0: (groupid=0, jobs=1): err= 0: pid=2522050: Tue Nov 19 11:45:34 2024 00:34:22.205 read: IOPS=567, BW=2269KiB/s (2323kB/s)(22.2MiB/10014msec) 00:34:22.205 slat (nsec): min=7188, max=85743, avg=28821.86, stdev=17312.37 00:34:22.205 clat (usec): min=18719, max=37367, avg=27945.97, stdev=838.70 00:34:22.205 lat (usec): min=18728, max=37379, avg=27974.79, stdev=836.96 00:34:22.205 clat percentiles (usec): 00:34:22.205 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 
20.00th=[27657], 00:34:22.205 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.205 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.205 | 99.00th=[29492], 99.50th=[30016], 99.90th=[36439], 99.95th=[36963], 00:34:22.205 | 99.99th=[37487] 00:34:22.205 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2264.65, stdev=72.72, samples=20 00:34:22.205 iops : min= 512, max= 576, avg=566.15, stdev=18.18, samples=20 00:34:22.205 lat (msec) : 20=0.35%, 50=99.65% 00:34:22.205 cpu : usr=98.49%, sys=1.16%, ctx=13, majf=0, minf=50 00:34:22.205 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:22.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.205 filename0: (groupid=0, jobs=1): err= 0: pid=2522051: Tue Nov 19 11:45:34 2024 00:34:22.205 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:34:22.205 slat (nsec): min=6906, max=56934, avg=17048.00, stdev=7382.75 00:34:22.205 clat (usec): min=12068, max=30214, avg=27959.44, stdev=1257.06 00:34:22.205 lat (usec): min=12086, max=30230, avg=27976.49, stdev=1256.16 00:34:22.205 clat percentiles (usec): 00:34:22.205 | 1.00th=[26346], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:22.205 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.205 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.205 | 99.00th=[29492], 99.50th=[30016], 99.90th=[30278], 99.95th=[30278], 00:34:22.205 | 99.99th=[30278] 00:34:22.205 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2277.05, stdev=53.61, samples=19 00:34:22.205 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:34:22.205 lat (msec) : 20=0.84%, 50=99.16% 
00:34:22.205 cpu : usr=98.44%, sys=1.22%, ctx=14, majf=0, minf=65 00:34:22.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.205 filename0: (groupid=0, jobs=1): err= 0: pid=2522052: Tue Nov 19 11:45:34 2024 00:34:22.205 read: IOPS=567, BW=2269KiB/s (2324kB/s)(22.2MiB/10012msec) 00:34:22.205 slat (nsec): min=4291, max=75831, avg=21030.71, stdev=6834.16 00:34:22.205 clat (usec): min=16690, max=41796, avg=28020.10, stdev=1090.41 00:34:22.205 lat (usec): min=16766, max=41809, avg=28041.13, stdev=1089.71 00:34:22.205 clat percentiles (usec): 00:34:22.205 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:22.205 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.205 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.205 | 99.00th=[29492], 99.50th=[32113], 99.90th=[41681], 99.95th=[41681], 00:34:22.205 | 99.99th=[41681] 00:34:22.205 bw ( KiB/s): min= 2052, max= 2304, per=4.15%, avg=2265.80, stdev=72.50, samples=20 00:34:22.205 iops : min= 513, max= 576, avg=566.45, stdev=18.12, samples=20 00:34:22.205 lat (msec) : 20=0.32%, 50=99.68% 00:34:22.205 cpu : usr=98.65%, sys=0.99%, ctx=20, majf=0, minf=30 00:34:22.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:22.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.205 filename1: (groupid=0, jobs=1): err= 0: 
pid=2522053: Tue Nov 19 11:45:34 2024 00:34:22.205 read: IOPS=566, BW=2265KiB/s (2320kB/s)(22.1MiB/10002msec) 00:34:22.205 slat (nsec): min=6019, max=74493, avg=20453.69, stdev=6779.10 00:34:22.205 clat (usec): min=16428, max=61443, avg=28061.27, stdev=1939.57 00:34:22.205 lat (usec): min=16442, max=61459, avg=28081.72, stdev=1938.82 00:34:22.205 clat percentiles (usec): 00:34:22.205 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.205 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.205 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.205 | 99.00th=[29492], 99.50th=[31851], 99.90th=[61604], 99.95th=[61604], 00:34:22.205 | 99.99th=[61604] 00:34:22.205 bw ( KiB/s): min= 2052, max= 2304, per=4.13%, avg=2257.05, stdev=75.85, samples=19 00:34:22.205 iops : min= 513, max= 576, avg=564.26, stdev=18.96, samples=19 00:34:22.205 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:34:22.205 cpu : usr=98.54%, sys=1.10%, ctx=14, majf=0, minf=31 00:34:22.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.205 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.206 filename1: (groupid=0, jobs=1): err= 0: pid=2522054: Tue Nov 19 11:45:34 2024 00:34:22.206 read: IOPS=567, BW=2270KiB/s (2324kB/s)(22.2MiB/10011msec) 00:34:22.206 slat (nsec): min=4161, max=67646, avg=19643.61, stdev=6981.57 00:34:22.206 clat (usec): min=16282, max=42639, avg=28040.21, stdev=1191.04 00:34:22.206 lat (usec): min=16299, max=42664, avg=28059.85, stdev=1190.09 00:34:22.206 clat percentiles (usec): 00:34:22.206 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:22.206 | 30.00th=[27919], 40.00th=[27919], 
50.00th=[27919], 60.00th=[27919], 00:34:22.206 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.206 | 99.00th=[29492], 99.50th=[31851], 99.90th=[41681], 99.95th=[42206], 00:34:22.206 | 99.99th=[42730] 00:34:22.206 bw ( KiB/s): min= 2052, max= 2304, per=4.15%, avg=2265.80, stdev=72.50, samples=20 00:34:22.206 iops : min= 513, max= 576, avg=566.45, stdev=18.12, samples=20 00:34:22.206 lat (msec) : 20=0.32%, 50=99.68% 00:34:22.206 cpu : usr=98.64%, sys=1.01%, ctx=13, majf=0, minf=28 00:34:22.206 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:22.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.206 filename1: (groupid=0, jobs=1): err= 0: pid=2522055: Tue Nov 19 11:45:34 2024 00:34:22.206 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:34:22.206 slat (nsec): min=7907, max=67345, avg=21748.01, stdev=7041.12 00:34:22.206 clat (usec): min=12176, max=30082, avg=27913.44, stdev=1249.01 00:34:22.206 lat (usec): min=12205, max=30107, avg=27935.19, stdev=1248.57 00:34:22.206 clat percentiles (usec): 00:34:22.206 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.206 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.206 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.206 | 99.00th=[29492], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:34:22.206 | 99.99th=[30016] 00:34:22.206 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2277.05, stdev=53.61, samples=19 00:34:22.206 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:34:22.206 lat (msec) : 20=0.84%, 50=99.16% 00:34:22.206 cpu : usr=98.43%, sys=1.22%, ctx=16, majf=0, minf=45 
00:34:22.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.206 filename1: (groupid=0, jobs=1): err= 0: pid=2522056: Tue Nov 19 11:45:34 2024 00:34:22.206 read: IOPS=567, BW=2270KiB/s (2325kB/s)(22.2MiB/10008msec) 00:34:22.206 slat (nsec): min=8196, max=75599, avg=21440.02, stdev=6984.83 00:34:22.206 clat (usec): min=16388, max=38395, avg=27997.82, stdev=959.17 00:34:22.206 lat (usec): min=16404, max=38408, avg=28019.26, stdev=958.37 00:34:22.206 clat percentiles (usec): 00:34:22.206 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.206 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.206 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.206 | 99.00th=[29492], 99.50th=[31851], 99.90th=[38536], 99.95th=[38536], 00:34:22.206 | 99.99th=[38536] 00:34:22.206 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2263.79, stdev=73.91, samples=19 00:34:22.206 iops : min= 513, max= 576, avg=565.95, stdev=18.48, samples=19 00:34:22.206 lat (msec) : 20=0.28%, 50=99.72% 00:34:22.206 cpu : usr=98.75%, sys=0.90%, ctx=13, majf=0, minf=41 00:34:22.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.206 filename1: (groupid=0, jobs=1): err= 0: pid=2522058: Tue Nov 19 11:45:34 2024 00:34:22.206 read: IOPS=567, 
BW=2269KiB/s (2323kB/s)(22.2MiB/10014msec) 00:34:22.206 slat (nsec): min=7165, max=86509, avg=28035.85, stdev=17589.79 00:34:22.206 clat (usec): min=18083, max=46952, avg=27960.20, stdev=1579.78 00:34:22.206 lat (usec): min=18091, max=46969, avg=27988.23, stdev=1579.19 00:34:22.206 clat percentiles (usec): 00:34:22.206 | 1.00th=[19268], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:34:22.206 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.206 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28967], 00:34:22.206 | 99.00th=[36963], 99.50th=[37487], 99.90th=[37487], 99.95th=[38011], 00:34:22.206 | 99.99th=[46924] 00:34:22.206 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2264.65, stdev=72.72, samples=20 00:34:22.206 iops : min= 512, max= 576, avg=566.15, stdev=18.18, samples=20 00:34:22.206 lat (msec) : 20=1.30%, 50=98.70% 00:34:22.206 cpu : usr=98.47%, sys=1.18%, ctx=14, majf=0, minf=51 00:34:22.206 IO depths : 1=5.7%, 2=11.8%, 4=24.6%, 8=51.1%, 16=6.8%, 32=0.0%, >=64=0.0% 00:34:22.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.206 filename1: (groupid=0, jobs=1): err= 0: pid=2522059: Tue Nov 19 11:45:34 2024 00:34:22.206 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:34:22.206 slat (nsec): min=7150, max=65076, avg=15588.76, stdev=7638.15 00:34:22.206 clat (usec): min=12130, max=40316, avg=27970.44, stdev=1515.43 00:34:22.206 lat (usec): min=12162, max=40333, avg=27986.03, stdev=1514.64 00:34:22.206 clat percentiles (usec): 00:34:22.206 | 1.00th=[18482], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:22.206 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:34:22.206 | 70.00th=[28181], 80.00th=[28181], 
90.00th=[28443], 95.00th=[28705], 00:34:22.206 | 99.00th=[29754], 99.50th=[30016], 99.90th=[39060], 99.95th=[39060], 00:34:22.206 | 99.99th=[40109] 00:34:22.206 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2277.05, stdev=53.61, samples=19 00:34:22.206 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:34:22.206 lat (msec) : 20=1.19%, 50=98.81% 00:34:22.206 cpu : usr=98.37%, sys=1.28%, ctx=15, majf=0, minf=61 00:34:22.206 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:22.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.206 filename1: (groupid=0, jobs=1): err= 0: pid=2522060: Tue Nov 19 11:45:34 2024 00:34:22.206 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:34:22.206 slat (nsec): min=7672, max=68261, avg=21940.47, stdev=6569.49 00:34:22.206 clat (usec): min=12206, max=30143, avg=27901.10, stdev=1241.25 00:34:22.206 lat (usec): min=12235, max=30164, avg=27923.04, stdev=1241.25 00:34:22.206 clat percentiles (usec): 00:34:22.206 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.206 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.206 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.206 | 99.00th=[29492], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:34:22.206 | 99.99th=[30016] 00:34:22.206 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2277.05, stdev=53.61, samples=19 00:34:22.206 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:34:22.206 lat (msec) : 20=0.84%, 50=99.16% 00:34:22.206 cpu : usr=98.35%, sys=1.29%, ctx=14, majf=0, minf=37 00:34:22.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:34:22.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.206 filename1: (groupid=0, jobs=1): err= 0: pid=2522061: Tue Nov 19 11:45:34 2024 00:34:22.206 read: IOPS=567, BW=2271KiB/s (2325kB/s)(22.2MiB/10006msec) 00:34:22.206 slat (nsec): min=6816, max=67569, avg=20243.98, stdev=7539.36 00:34:22.206 clat (usec): min=11761, max=42744, avg=27988.58, stdev=1305.77 00:34:22.206 lat (usec): min=11769, max=42764, avg=28008.82, stdev=1306.27 00:34:22.206 clat percentiles (usec): 00:34:22.206 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.206 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.206 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.206 | 99.00th=[29754], 99.50th=[30016], 99.90th=[42730], 99.95th=[42730], 00:34:22.206 | 99.99th=[42730] 00:34:22.206 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2263.58, stdev=74.55, samples=19 00:34:22.206 iops : min= 512, max= 576, avg=565.89, stdev=18.64, samples=19 00:34:22.206 lat (msec) : 20=0.32%, 50=99.68% 00:34:22.206 cpu : usr=98.55%, sys=1.09%, ctx=12, majf=0, minf=44 00:34:22.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:22.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.206 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.206 filename2: (groupid=0, jobs=1): err= 0: pid=2522062: Tue Nov 19 11:45:34 2024 00:34:22.206 read: IOPS=568, BW=2273KiB/s (2328kB/s)(22.2MiB/10005msec) 00:34:22.206 slat (nsec): min=7016, 
max=83684, avg=29119.38, stdev=17277.01 00:34:22.206 clat (usec): min=18016, max=52396, avg=27880.36, stdev=1334.41 00:34:22.206 lat (usec): min=18037, max=52409, avg=27909.48, stdev=1334.45 00:34:22.206 clat percentiles (usec): 00:34:22.206 | 1.00th=[21103], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:34:22.206 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.206 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.207 | 99.00th=[29492], 99.50th=[31589], 99.90th=[42730], 99.95th=[42730], 00:34:22.207 | 99.99th=[52167] 00:34:22.207 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2266.32, stdev=58.08, samples=19 00:34:22.207 iops : min= 544, max= 576, avg=566.58, stdev=14.52, samples=19 00:34:22.207 lat (msec) : 20=0.91%, 50=99.05%, 100=0.04% 00:34:22.207 cpu : usr=98.46%, sys=1.19%, ctx=13, majf=0, minf=43 00:34:22.207 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:22.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.207 filename2: (groupid=0, jobs=1): err= 0: pid=2522063: Tue Nov 19 11:45:34 2024 00:34:22.207 read: IOPS=580, BW=2323KiB/s (2378kB/s)(22.7MiB/10003msec) 00:34:22.207 slat (nsec): min=6391, max=84844, avg=14976.64, stdev=10670.44 00:34:22.207 clat (usec): min=8128, max=66576, avg=27497.27, stdev=3750.09 00:34:22.207 lat (usec): min=8135, max=66593, avg=27512.25, stdev=3748.80 00:34:22.207 clat percentiles (usec): 00:34:22.207 | 1.00th=[18744], 5.00th=[22676], 10.00th=[22938], 20.00th=[23987], 00:34:22.207 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:34:22.207 | 70.00th=[28181], 80.00th=[28443], 90.00th=[32637], 95.00th=[33162], 00:34:22.207 | 99.00th=[34341], 
99.50th=[41157], 99.90th=[51119], 99.95th=[51119], 00:34:22.207 | 99.99th=[66323] 00:34:22.207 bw ( KiB/s): min= 2160, max= 2512, per=4.24%, avg=2315.79, stdev=66.37, samples=19 00:34:22.207 iops : min= 540, max= 628, avg=578.95, stdev=16.59, samples=19 00:34:22.207 lat (msec) : 10=0.10%, 20=3.34%, 50=96.28%, 100=0.28% 00:34:22.207 cpu : usr=98.32%, sys=1.30%, ctx=21, majf=0, minf=50 00:34:22.207 IO depths : 1=0.1%, 2=0.2%, 4=2.7%, 8=80.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:34:22.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 complete : 0=0.0%, 4=89.0%, 8=9.1%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.207 filename2: (groupid=0, jobs=1): err= 0: pid=2522064: Tue Nov 19 11:45:34 2024 00:34:22.207 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:34:22.207 slat (nsec): min=7112, max=64736, avg=21208.74, stdev=6920.46 00:34:22.207 clat (usec): min=12165, max=30178, avg=27922.82, stdev=1249.01 00:34:22.207 lat (usec): min=12198, max=30194, avg=27944.03, stdev=1248.46 00:34:22.207 clat percentiles (usec): 00:34:22.207 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:22.207 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.207 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.207 | 99.00th=[29492], 99.50th=[29754], 99.90th=[30016], 99.95th=[30278], 00:34:22.207 | 99.99th=[30278] 00:34:22.207 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2277.05, stdev=53.61, samples=19 00:34:22.207 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:34:22.207 lat (msec) : 20=0.84%, 50=99.16% 00:34:22.207 cpu : usr=98.60%, sys=1.06%, ctx=13, majf=0, minf=53 00:34:22.207 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.207 filename2: (groupid=0, jobs=1): err= 0: pid=2522065: Tue Nov 19 11:45:34 2024 00:34:22.207 read: IOPS=569, BW=2276KiB/s (2331kB/s)(22.2MiB/10009msec) 00:34:22.207 slat (nsec): min=5987, max=64345, avg=21532.41, stdev=6661.07 00:34:22.207 clat (usec): min=15063, max=30182, avg=27916.24, stdev=1076.38 00:34:22.207 lat (usec): min=15080, max=30202, avg=27937.77, stdev=1077.12 00:34:22.207 clat percentiles (usec): 00:34:22.207 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.207 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.207 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.207 | 99.00th=[29492], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:34:22.207 | 99.99th=[30278] 00:34:22.207 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2270.32, stdev=57.91, samples=19 00:34:22.207 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:34:22.207 lat (msec) : 20=0.56%, 50=99.44% 00:34:22.207 cpu : usr=98.57%, sys=1.08%, ctx=14, majf=0, minf=34 00:34:22.207 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.207 filename2: (groupid=0, jobs=1): err= 0: pid=2522066: Tue Nov 19 11:45:34 2024 00:34:22.207 read: IOPS=586, BW=2347KiB/s (2403kB/s)(22.9MiB/10014msec) 00:34:22.207 slat (nsec): min=6804, max=56017, avg=10453.68, stdev=4358.12 00:34:22.207 clat 
(usec): min=2313, max=36874, avg=27182.93, stdev=4024.07 00:34:22.207 lat (usec): min=2322, max=36882, avg=27193.38, stdev=4023.64 00:34:22.207 clat percentiles (usec): 00:34:22.207 | 1.00th=[ 2769], 5.00th=[20055], 10.00th=[27657], 20.00th=[27919], 00:34:22.207 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:34:22.207 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.207 | 99.00th=[29230], 99.50th=[31589], 99.90th=[36439], 99.95th=[36963], 00:34:22.207 | 99.99th=[36963] 00:34:22.207 bw ( KiB/s): min= 2176, max= 3743, per=4.29%, avg=2343.95, stdev=334.09, samples=20 00:34:22.207 iops : min= 544, max= 935, avg=585.95, stdev=83.36, samples=20 00:34:22.207 lat (msec) : 4=1.36%, 10=0.89%, 20=2.55%, 50=95.20% 00:34:22.207 cpu : usr=98.55%, sys=1.10%, ctx=13, majf=0, minf=43 00:34:22.207 IO depths : 1=5.7%, 2=11.6%, 4=23.7%, 8=52.2%, 16=6.8%, 32=0.0%, >=64=0.0% 00:34:22.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 issued rwts: total=5875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.207 filename2: (groupid=0, jobs=1): err= 0: pid=2522068: Tue Nov 19 11:45:34 2024 00:34:22.207 read: IOPS=567, BW=2269KiB/s (2324kB/s)(22.2MiB/10012msec) 00:34:22.207 slat (nsec): min=4257, max=74407, avg=19063.13, stdev=6798.12 00:34:22.207 clat (usec): min=16340, max=43032, avg=28047.64, stdev=1119.96 00:34:22.207 lat (usec): min=16370, max=43060, avg=28066.70, stdev=1118.66 00:34:22.207 clat percentiles (usec): 00:34:22.207 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:22.207 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.207 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.207 | 99.00th=[29492], 99.50th=[32113], 99.90th=[43254], 99.95th=[43254], 
00:34:22.207 | 99.99th=[43254] 00:34:22.207 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2265.60, stdev=73.12, samples=20 00:34:22.207 iops : min= 512, max= 576, avg=566.40, stdev=18.28, samples=20 00:34:22.207 lat (msec) : 20=0.28%, 50=99.72% 00:34:22.207 cpu : usr=98.57%, sys=1.08%, ctx=17, majf=0, minf=56 00:34:22.207 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.207 filename2: (groupid=0, jobs=1): err= 0: pid=2522069: Tue Nov 19 11:45:34 2024 00:34:22.207 read: IOPS=566, BW=2265KiB/s (2319kB/s)(22.1MiB/10004msec) 00:34:22.207 slat (nsec): min=6868, max=85138, avg=28112.91, stdev=17405.88 00:34:22.207 clat (usec): min=3527, max=67197, avg=27969.17, stdev=1574.88 00:34:22.207 lat (usec): min=3562, max=67220, avg=27997.29, stdev=1573.91 00:34:22.207 clat percentiles (usec): 00:34:22.207 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:34:22.207 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.207 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.207 | 99.00th=[29492], 99.50th=[29754], 99.90th=[51643], 99.95th=[52167], 00:34:22.207 | 99.99th=[67634] 00:34:22.207 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2263.58, stdev=74.55, samples=19 00:34:22.207 iops : min= 512, max= 576, avg=565.89, stdev=18.64, samples=19 00:34:22.207 lat (msec) : 4=0.02%, 20=0.32%, 50=99.38%, 100=0.28% 00:34:22.207 cpu : usr=98.52%, sys=1.12%, ctx=15, majf=0, minf=37 00:34:22.207 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:22.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 
complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 issued rwts: total=5665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.207 filename2: (groupid=0, jobs=1): err= 0: pid=2522070: Tue Nov 19 11:45:34 2024 00:34:22.207 read: IOPS=597, BW=2390KiB/s (2448kB/s)(23.4MiB/10013msec) 00:34:22.207 slat (usec): min=6, max=252, avg=47.07, stdev=20.11 00:34:22.207 clat (usec): min=11305, max=37860, avg=26392.57, stdev=3244.54 00:34:22.207 lat (usec): min=11551, max=37910, avg=26439.64, stdev=3239.87 00:34:22.207 clat percentiles (usec): 00:34:22.207 | 1.00th=[17695], 5.00th=[18482], 10.00th=[19268], 20.00th=[27132], 00:34:22.207 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:34:22.207 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:34:22.207 | 99.00th=[30278], 99.50th=[31589], 99.90th=[36439], 99.95th=[37487], 00:34:22.207 | 99.99th=[38011] 00:34:22.207 bw ( KiB/s): min= 2176, max= 3088, per=4.37%, avg=2387.20, stdev=283.41, samples=20 00:34:22.207 iops : min= 544, max= 772, avg=596.80, stdev=70.85, samples=20 00:34:22.207 lat (msec) : 20=12.90%, 50=87.10% 00:34:22.207 cpu : usr=98.63%, sys=0.98%, ctx=16, majf=0, minf=48 00:34:22.207 IO depths : 1=5.1%, 2=10.1%, 4=21.3%, 8=55.9%, 16=7.6%, 32=0.0%, >=64=0.0% 00:34:22.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.207 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.208 00:34:22.208 Run status group 0 (all jobs): 00:34:22.208 READ: bw=53.3MiB/s (55.9MB/s), 2256KiB/s-2390KiB/s (2310kB/s-2448kB/s), io=536MiB (562MB), run=10001-10044msec 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:22.208 11:45:34 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:22.208 
11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 bdev_null0 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 [2024-11-19 11:45:34.750223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 bdev_null1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:22.208 { 00:34:22.208 "params": { 00:34:22.208 "name": "Nvme$subsystem", 00:34:22.208 "trtype": "$TEST_TRANSPORT", 00:34:22.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:22.208 "adrfam": "ipv4", 00:34:22.208 "trsvcid": 
"$NVMF_PORT", 00:34:22.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:22.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:22.208 "hdgst": ${hdgst:-false}, 00:34:22.208 "ddgst": ${ddgst:-false} 00:34:22.208 }, 00:34:22.208 "method": "bdev_nvme_attach_controller" 00:34:22.208 } 00:34:22.208 EOF 00:34:22.208 )") 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:22.208 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:22.209 11:45:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:22.209 { 00:34:22.209 "params": { 00:34:22.209 "name": "Nvme$subsystem", 00:34:22.209 "trtype": "$TEST_TRANSPORT", 00:34:22.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:22.209 "adrfam": "ipv4", 00:34:22.209 "trsvcid": "$NVMF_PORT", 00:34:22.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:22.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:22.209 "hdgst": ${hdgst:-false}, 00:34:22.209 "ddgst": ${ddgst:-false} 00:34:22.209 }, 00:34:22.209 "method": "bdev_nvme_attach_controller" 00:34:22.209 } 00:34:22.209 EOF 00:34:22.209 )") 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:22.209 "params": { 00:34:22.209 "name": "Nvme0", 00:34:22.209 "trtype": "tcp", 00:34:22.209 "traddr": "10.0.0.2", 00:34:22.209 "adrfam": "ipv4", 00:34:22.209 "trsvcid": "4420", 00:34:22.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:22.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:22.209 "hdgst": false, 00:34:22.209 "ddgst": false 00:34:22.209 }, 00:34:22.209 "method": "bdev_nvme_attach_controller" 00:34:22.209 },{ 00:34:22.209 "params": { 00:34:22.209 "name": "Nvme1", 00:34:22.209 "trtype": "tcp", 00:34:22.209 "traddr": "10.0.0.2", 00:34:22.209 "adrfam": "ipv4", 00:34:22.209 "trsvcid": "4420", 00:34:22.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:22.209 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:22.209 "hdgst": false, 00:34:22.209 "ddgst": false 00:34:22.209 }, 00:34:22.209 "method": "bdev_nvme_attach_controller" 00:34:22.209 }' 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:22.209 11:45:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:22.209 11:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.209 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:22.209 ... 00:34:22.209 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:22.209 ... 00:34:22.209 fio-3.35 00:34:22.209 Starting 4 threads 00:34:27.477 00:34:27.477 filename0: (groupid=0, jobs=1): err= 0: pid=2524014: Tue Nov 19 11:45:40 2024 00:34:27.477 read: IOPS=2749, BW=21.5MiB/s (22.5MB/s)(107MiB/5002msec) 00:34:27.477 slat (nsec): min=6127, max=49156, avg=9784.61, stdev=3872.32 00:34:27.477 clat (usec): min=585, max=5579, avg=2875.53, stdev=395.71 00:34:27.477 lat (usec): min=597, max=5592, avg=2885.32, stdev=396.17 00:34:27.477 clat percentiles (usec): 00:34:27.477 | 1.00th=[ 1729], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2540], 00:34:27.477 | 30.00th=[ 2737], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3032], 00:34:27.477 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3195], 95.00th=[ 3359], 00:34:27.477 | 99.00th=[ 3949], 99.50th=[ 4113], 99.90th=[ 5080], 99.95th=[ 5342], 00:34:27.477 | 99.99th=[ 5538] 00:34:27.477 bw ( KiB/s): min=20912, max=23232, per=26.49%, avg=22035.56, stdev=703.22, samples=9 00:34:27.477 iops : min= 2614, max= 2904, avg=2754.44, stdev=87.90, samples=9 00:34:27.477 lat (usec) : 750=0.01%, 1000=0.12% 00:34:27.477 lat (msec) : 2=1.71%, 4=97.24%, 10=0.92% 00:34:27.477 cpu : usr=95.78%, sys=3.90%, ctx=8, majf=0, minf=9 00:34:27.477 IO depths : 1=1.3%, 2=12.5%, 4=60.9%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.477 
complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.477 issued rwts: total=13755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.477 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:27.477 filename0: (groupid=0, jobs=1): err= 0: pid=2524015: Tue Nov 19 11:45:40 2024 00:34:27.477 read: IOPS=2528, BW=19.8MiB/s (20.7MB/s)(98.8MiB/5001msec) 00:34:27.477 slat (nsec): min=6098, max=50708, avg=10205.01, stdev=4233.55 00:34:27.477 clat (usec): min=677, max=6035, avg=3130.69, stdev=388.92 00:34:27.477 lat (usec): min=688, max=6048, avg=3140.90, stdev=388.64 00:34:27.477 clat percentiles (usec): 00:34:27.477 | 1.00th=[ 2212], 5.00th=[ 2671], 10.00th=[ 2835], 20.00th=[ 2999], 00:34:27.477 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:34:27.477 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3523], 95.00th=[ 3752], 00:34:27.477 | 99.00th=[ 4621], 99.50th=[ 5014], 99.90th=[ 5473], 99.95th=[ 5473], 00:34:27.477 | 99.99th=[ 5669] 00:34:27.477 bw ( KiB/s): min=19168, max=21056, per=24.17%, avg=20110.22, stdev=562.08, samples=9 00:34:27.477 iops : min= 2396, max= 2632, avg=2513.78, stdev=70.26, samples=9 00:34:27.477 lat (usec) : 750=0.05%, 1000=0.02% 00:34:27.477 lat (msec) : 2=0.45%, 4=96.18%, 10=3.30% 00:34:27.477 cpu : usr=95.80%, sys=3.90%, ctx=10, majf=0, minf=9 00:34:27.477 IO depths : 1=0.8%, 2=6.6%, 4=66.4%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.477 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.477 issued rwts: total=12646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.477 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:27.477 filename1: (groupid=0, jobs=1): err= 0: pid=2524017: Tue Nov 19 11:45:40 2024 00:34:27.477 read: IOPS=2517, BW=19.7MiB/s (20.6MB/s)(98.4MiB/5001msec) 00:34:27.477 slat (nsec): min=6146, max=40376, avg=10063.52, stdev=4150.14 00:34:27.477 clat 
(usec): min=559, max=5881, avg=3146.63, stdev=417.51 00:34:27.477 lat (usec): min=570, max=5888, avg=3156.69, stdev=417.23 00:34:27.477 clat percentiles (usec): 00:34:27.477 | 1.00th=[ 2212], 5.00th=[ 2704], 10.00th=[ 2868], 20.00th=[ 2999], 00:34:27.477 | 30.00th=[ 3032], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:34:27.477 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3556], 95.00th=[ 3818], 00:34:27.477 | 99.00th=[ 4948], 99.50th=[ 5211], 99.90th=[ 5604], 99.95th=[ 5800], 00:34:27.477 | 99.99th=[ 5866] 00:34:27.477 bw ( KiB/s): min=19328, max=20752, per=24.11%, avg=20059.56, stdev=432.39, samples=9 00:34:27.477 iops : min= 2416, max= 2594, avg=2507.44, stdev=54.05, samples=9 00:34:27.477 lat (usec) : 750=0.06%, 1000=0.09% 00:34:27.477 lat (msec) : 2=0.60%, 4=95.42%, 10=3.84% 00:34:27.477 cpu : usr=95.44%, sys=4.26%, ctx=9, majf=0, minf=9 00:34:27.477 IO depths : 1=0.3%, 2=5.4%, 4=67.2%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.477 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.477 issued rwts: total=12590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.477 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:27.477 filename1: (groupid=0, jobs=1): err= 0: pid=2524018: Tue Nov 19 11:45:40 2024 00:34:27.477 read: IOPS=2604, BW=20.4MiB/s (21.3MB/s)(102MiB/5001msec) 00:34:27.477 slat (usec): min=6, max=183, avg=10.33, stdev= 4.47 00:34:27.477 clat (usec): min=659, max=5384, avg=3036.96, stdev=382.45 00:34:27.477 lat (usec): min=670, max=5395, avg=3047.29, stdev=382.39 00:34:27.477 clat percentiles (usec): 00:34:27.477 | 1.00th=[ 2040], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2868], 00:34:27.477 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:34:27.477 | 70.00th=[ 3097], 80.00th=[ 3163], 90.00th=[ 3326], 95.00th=[ 3687], 00:34:27.477 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 5080], 99.95th=[ 
5145], 00:34:27.477 | 99.99th=[ 5276] 00:34:27.477 bw ( KiB/s): min=20032, max=21632, per=25.07%, avg=20860.44, stdev=522.14, samples=9 00:34:27.477 iops : min= 2504, max= 2704, avg=2607.56, stdev=65.27, samples=9 00:34:27.477 lat (usec) : 750=0.02%, 1000=0.04% 00:34:27.477 lat (msec) : 2=0.85%, 4=96.76%, 10=2.33% 00:34:27.477 cpu : usr=95.82%, sys=3.86%, ctx=7, majf=0, minf=9 00:34:27.477 IO depths : 1=1.0%, 2=8.5%, 4=64.9%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.477 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.477 issued rwts: total=13027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.477 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:27.477 00:34:27.477 Run status group 0 (all jobs): 00:34:27.477 READ: bw=81.2MiB/s (85.2MB/s), 19.7MiB/s-21.5MiB/s (20.6MB/s-22.5MB/s), io=406MiB (426MB), run=5001-5002msec 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:27.477 11:45:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.477 00:34:27.477 real 0m24.577s 00:34:27.477 user 4m52.336s 00:34:27.477 sys 0m5.218s 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.477 11:45:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:27.477 ************************************ 00:34:27.477 END TEST fio_dif_rand_params 00:34:27.477 ************************************ 00:34:27.477 11:45:41 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:27.477 11:45:41 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:27.477 11:45:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:27.477 11:45:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.477 ************************************ 00:34:27.477 START TEST fio_dif_digest 00:34:27.477 ************************************ 00:34:27.477 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:27.477 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:27.477 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:27.477 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:27.477 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:27.478 bdev_null0 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:27.478 [2024-11-19 11:45:41.248805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:27.478 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.737 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:27.737 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:27.737 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:27.737 11:45:41 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:34:27.737 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:27.737 11:45:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:27.737 11:45:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:27.737 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:27.737 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:27.737 11:45:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:27.737 { 00:34:27.737 "params": { 00:34:27.737 "name": "Nvme$subsystem", 00:34:27.737 "trtype": "$TEST_TRANSPORT", 00:34:27.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:27.737 "adrfam": "ipv4", 00:34:27.737 "trsvcid": "$NVMF_PORT", 00:34:27.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:27.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:27.738 "hdgst": ${hdgst:-false}, 00:34:27.738 "ddgst": ${ddgst:-false} 00:34:27.738 }, 00:34:27.738 "method": "bdev_nvme_attach_controller" 00:34:27.738 } 00:34:27.738 EOF 00:34:27.738 )") 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:27.738 "params": { 00:34:27.738 "name": "Nvme0", 00:34:27.738 "trtype": "tcp", 00:34:27.738 "traddr": "10.0.0.2", 00:34:27.738 "adrfam": "ipv4", 00:34:27.738 "trsvcid": "4420", 00:34:27.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:27.738 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:27.738 "hdgst": true, 00:34:27.738 "ddgst": true 00:34:27.738 }, 00:34:27.738 "method": "bdev_nvme_attach_controller" 00:34:27.738 }' 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:27.738 11:45:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:27.997 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:27.997 ... 
00:34:27.997 fio-3.35 00:34:27.997 Starting 3 threads 00:34:40.203 00:34:40.203 filename0: (groupid=0, jobs=1): err= 0: pid=2525202: Tue Nov 19 11:45:52 2024 00:34:40.203 read: IOPS=302, BW=37.8MiB/s (39.6MB/s)(380MiB/10044msec) 00:34:40.203 slat (nsec): min=6631, max=97674, avg=18167.34, stdev=6381.84 00:34:40.203 clat (usec): min=7523, max=51521, avg=9884.45, stdev=1230.06 00:34:40.203 lat (usec): min=7546, max=51530, avg=9902.62, stdev=1229.79 00:34:40.203 clat percentiles (usec): 00:34:40.203 | 1.00th=[ 8291], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9241], 00:34:40.203 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:34:40.203 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:34:40.203 | 99.00th=[11469], 99.50th=[11731], 99.90th=[12387], 99.95th=[47449], 00:34:40.203 | 99.99th=[51643] 00:34:40.203 bw ( KiB/s): min=37888, max=40192, per=36.02%, avg=38860.80, stdev=646.57, samples=20 00:34:40.203 iops : min= 296, max= 314, avg=303.60, stdev= 5.05, samples=20 00:34:40.203 lat (msec) : 10=57.50%, 20=42.43%, 50=0.03%, 100=0.03% 00:34:40.203 cpu : usr=95.53%, sys=4.14%, ctx=35, majf=0, minf=123 00:34:40.203 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.203 issued rwts: total=3038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.203 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:40.203 filename0: (groupid=0, jobs=1): err= 0: pid=2525203: Tue Nov 19 11:45:52 2024 00:34:40.203 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(332MiB/10045msec) 00:34:40.203 slat (nsec): min=6465, max=46227, avg=16292.66, stdev=7090.66 00:34:40.203 clat (usec): min=8860, max=50074, avg=11312.03, stdev=1265.74 00:34:40.203 lat (usec): min=8872, max=50086, avg=11328.33, stdev=1266.00 00:34:40.203 clat percentiles (usec): 00:34:40.203 
| 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:34:40.203 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:34:40.203 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:34:40.203 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13829], 99.95th=[45351], 00:34:40.203 | 99.99th=[50070] 00:34:40.203 bw ( KiB/s): min=33024, max=35072, per=31.49%, avg=33971.20, stdev=638.52, samples=20 00:34:40.203 iops : min= 258, max= 274, avg=265.40, stdev= 4.99, samples=20 00:34:40.203 lat (msec) : 10=4.52%, 20=95.41%, 50=0.04%, 100=0.04% 00:34:40.203 cpu : usr=96.69%, sys=3.00%, ctx=16, majf=0, minf=42 00:34:40.203 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.203 issued rwts: total=2656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.203 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:40.203 filename0: (groupid=0, jobs=1): err= 0: pid=2525204: Tue Nov 19 11:45:52 2024 00:34:40.203 read: IOPS=275, BW=34.5MiB/s (36.2MB/s)(347MiB/10045msec) 00:34:40.203 slat (nsec): min=6548, max=65149, avg=16365.98, stdev=7272.20 00:34:40.203 clat (usec): min=7075, max=48210, avg=10837.90, stdev=1224.26 00:34:40.203 lat (usec): min=7087, max=48228, avg=10854.27, stdev=1224.47 00:34:40.203 clat percentiles (usec): 00:34:40.203 | 1.00th=[ 9241], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:34:40.203 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:34:40.203 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:34:40.203 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13173], 99.95th=[45351], 00:34:40.203 | 99.99th=[47973] 00:34:40.203 bw ( KiB/s): min=34560, max=36864, per=32.87%, avg=35456.00, stdev=494.87, samples=20 00:34:40.203 iops : min= 270, max= 288, avg=277.00, 
stdev= 3.87, samples=20 00:34:40.203 lat (msec) : 10=13.85%, 20=86.08%, 50=0.07% 00:34:40.203 cpu : usr=96.89%, sys=2.81%, ctx=21, majf=0, minf=89 00:34:40.203 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.203 issued rwts: total=2772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.203 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:40.203 00:34:40.203 Run status group 0 (all jobs): 00:34:40.203 READ: bw=105MiB/s (110MB/s), 33.1MiB/s-37.8MiB/s (34.7MB/s-39.6MB/s), io=1058MiB (1110MB), run=10044-10045msec 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.203 
00:34:40.203 real 0m11.238s 00:34:40.203 user 0m35.891s 00:34:40.203 sys 0m1.368s 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.203 11:45:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:40.203 ************************************ 00:34:40.203 END TEST fio_dif_digest 00:34:40.203 ************************************ 00:34:40.203 11:45:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:40.203 11:45:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:40.203 rmmod nvme_tcp 00:34:40.203 rmmod nvme_fabrics 00:34:40.203 rmmod nvme_keyring 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2516130 ']' 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2516130 00:34:40.203 11:45:52 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2516130 ']' 00:34:40.203 11:45:52 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2516130 00:34:40.203 11:45:52 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:40.203 11:45:52 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:40.203 11:45:52 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2516130 00:34:40.203 11:45:52 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:34:40.203 11:45:52 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:40.203 11:45:52 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2516130' 00:34:40.203 killing process with pid 2516130 00:34:40.203 11:45:52 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2516130 00:34:40.203 11:45:52 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2516130 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:40.203 11:45:52 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:42.112 Waiting for block devices as requested 00:34:42.112 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:42.112 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:42.112 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:42.112 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:42.112 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:42.371 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:42.371 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:42.371 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:42.631 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:42.631 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:42.631 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:42.631 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:42.890 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:42.890 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:42.890 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:43.149 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:43.149 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:43.149 11:45:56 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:43.149 11:45:56 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:43.149 11:45:56 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:43.149 11:45:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:43.149 11:45:56 nvmf_dif -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:43.149 11:45:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:43.149 11:45:56 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:43.149 11:45:56 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:43.149 11:45:56 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.149 11:45:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:43.149 11:45:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.690 11:45:58 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:45.690 00:34:45.690 real 1m14.425s 00:34:45.690 user 7m10.958s 00:34:45.690 sys 0m20.204s 00:34:45.690 11:45:58 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.690 11:45:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.690 ************************************ 00:34:45.690 END TEST nvmf_dif 00:34:45.690 ************************************ 00:34:45.690 11:45:58 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:45.690 11:45:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:45.690 11:45:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:45.690 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:34:45.690 ************************************ 00:34:45.690 START TEST nvmf_abort_qd_sizes 00:34:45.690 ************************************ 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:45.690 * Looking for test storage... 
00:34:45.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:45.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.690 --rc genhtml_branch_coverage=1 00:34:45.690 --rc genhtml_function_coverage=1 00:34:45.690 --rc genhtml_legend=1 00:34:45.690 --rc geninfo_all_blocks=1 00:34:45.690 --rc geninfo_unexecuted_blocks=1 00:34:45.690 00:34:45.690 ' 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:45.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.690 --rc genhtml_branch_coverage=1 00:34:45.690 --rc genhtml_function_coverage=1 00:34:45.690 --rc genhtml_legend=1 00:34:45.690 --rc 
geninfo_all_blocks=1 00:34:45.690 --rc geninfo_unexecuted_blocks=1 00:34:45.690 00:34:45.690 ' 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:45.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.690 --rc genhtml_branch_coverage=1 00:34:45.690 --rc genhtml_function_coverage=1 00:34:45.690 --rc genhtml_legend=1 00:34:45.690 --rc geninfo_all_blocks=1 00:34:45.690 --rc geninfo_unexecuted_blocks=1 00:34:45.690 00:34:45.690 ' 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:45.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.690 --rc genhtml_branch_coverage=1 00:34:45.690 --rc genhtml_function_coverage=1 00:34:45.690 --rc genhtml_legend=1 00:34:45.690 --rc geninfo_all_blocks=1 00:34:45.690 --rc geninfo_unexecuted_blocks=1 00:34:45.690 00:34:45.690 ' 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.690 11:45:59 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.690 11:45:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.691 11:45:59 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:45.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:45.691 11:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:52.281 11:46:04 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:52.281 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:52.281 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:52.281 Found net devices under 0000:86:00.0: cvl_0_0 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:52.281 Found net devices under 0000:86:00.1: cvl_0_1 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:52.281 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:52.282 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:52.282 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:52.282 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:52.282 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:52.282 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:52.282 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:52.282 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:52.282 11:46:04 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:52.282 11:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:52.282 11:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:52.282 11:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:52.282 11:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:52.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:52.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:34:52.282 00:34:52.282 --- 10.0.0.2 ping statistics --- 00:34:52.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.282 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:34:52.282 11:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:52.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:52.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:34:52.282 00:34:52.282 --- 10.0.0.1 ping statistics --- 00:34:52.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.282 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:34:52.282 11:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:52.282 11:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:52.282 11:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:52.282 11:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:54.190 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:54.190 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:54.190 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:54.190 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:54.191 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:54.191 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:54.191 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:54.191 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:54.191 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:54.191 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:54.191 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:54.450 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:54.450 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:54.450 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:54.450 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:54.450 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:55.019 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:55.279 11:46:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2532991 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2532991 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2532991 ']' 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:55.279 11:46:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:55.279 [2024-11-19 11:46:09.001242] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:34:55.279 [2024-11-19 11:46:09.001287] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.538 [2024-11-19 11:46:09.084072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:55.538 [2024-11-19 11:46:09.131023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.538 [2024-11-19 11:46:09.131057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:55.538 [2024-11-19 11:46:09.131064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.538 [2024-11-19 11:46:09.131072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.538 [2024-11-19 11:46:09.131077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:55.538 [2024-11-19 11:46:09.132445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.538 [2024-11-19 11:46:09.132553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:55.538 [2024-11-19 11:46:09.132660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.538 [2024-11-19 11:46:09.132661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:56.103 11:46:09 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 ))
00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0
00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 ))
00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0
00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target
00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:56.361 11:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:34:56.361 ************************************
00:34:56.361 START TEST spdk_target_abort
00:34:56.361 ************************************
00:34:56.361 11:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target
00:34:56.361 11:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:34:56.361 11:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
00:34:56.361 11:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.361 11:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:59.645 spdk_targetn1
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:59.645 [2024-11-19 11:46:12.763968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:34:59.645 [2024-11-19 11:46:12.798018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:34:59.645 11:46:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:02.927 Initializing NVMe Controllers
00:35:02.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:35:02.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:35:02.927 Initialization complete. Launching workers.
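The `for r in trtype adrfam traddr trsvcid subnqn` loop traced above builds the transport-ID string that is then handed to the abort example via `-r`. A minimal standalone sketch of the same assembly (plain bash, no SPDK required; the field values mirror the log):

```shell
# Rebuild the transport ID exactly as the traced rabort() loop does:
# for each field name, append "name:value" to $target.
trtype=tcp
adrfam=IPv4
traddr=10.0.0.2
trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn

target=
for r in trtype adrfam traddr trsvcid subnqn; do
    # ${!r} is bash indirect expansion: the value of the variable named by $r
    target="${target:+$target }$r:${!r}"
done

echo "$target"
# -> trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn
```

The resulting string is the standard SPDK transport-ID format consumed by `build/examples/abort -r '...'`, as seen in the invocation below this loop in the trace.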
00:35:02.927 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 18048, failed: 0
00:35:02.927 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1400, failed to submit 16648
00:35:02.927 success 800, unsuccessful 600, failed 0
00:35:02.927 11:46:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:02.927 11:46:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:06.209 Initializing NVMe Controllers
00:35:06.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:35:06.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:35:06.209 Initialization complete. Launching workers.
00:35:06.209 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8530, failed: 0
00:35:06.209 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1271, failed to submit 7259
00:35:06.209 success 334, unsuccessful 937, failed 0
00:35:06.209 11:46:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:06.209 11:46:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:08.853 Initializing NVMe Controllers
00:35:08.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:35:08.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:35:08.853 Initialization complete. Launching workers.
00:35:08.853 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37747, failed: 0
00:35:08.853 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2837, failed to submit 34910
00:35:08.853 success 590, unsuccessful 2247, failed 0
00:35:08.853 11:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:35:08.853 11:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.853 11:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:35:08.853 11:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.853 11:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:35:08.853 11:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.853 11:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:35:10.229 11:46:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.229 11:46:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2532991
00:35:10.229 11:46:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2532991 ']'
00:35:10.229 11:46:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2532991
00:35:10.229 11:46:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname
00:35:10.229 11:46:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:10.229 11:46:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2532991
00:35:10.230 11:46:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:10.230 11:46:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:10.230 11:46:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2532991'
killing process with pid 2532991
11:46:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2532991
00:35:10.230 11:46:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2532991
00:35:10.489
00:35:10.489 real 0m14.110s
00:35:10.489 user 0m56.119s
00:35:10.489 sys 0m2.672s
11:46:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:35:10.489 ************************************
00:35:10.489 END TEST spdk_target_abort
00:35:10.489 ************************************
00:35:10.489 11:46:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:35:10.489 11:46:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:35:10.489 11:46:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:10.489 11:46:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:35:10.489 ************************************
00:35:10.489 START TEST kernel_target_abort
00:35:10.489 ************************************
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:35:10.489 11:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:35:13.029 Waiting for block devices as requested
00:35:13.288 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:35:13.288 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:35:13.547 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:35:13.547 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:35:13.547 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:35:13.547 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:35:13.806 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:35:13.806 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:35:13.806 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:35:14.065 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:35:14.065 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:35:14.065 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:35:14.065 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:35:14.324 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:35:14.324 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:35:14.324 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:35:14.583 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:35:14.583 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:35:14.583 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:35:14.583 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:35:14.583 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:35:14.584 No valid GPT data, bailing
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:35:14.584 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:35:14.843
00:35:14.843 Discovery Log Number of Records 2, Generation counter 2
00:35:14.843 =====Discovery Log Entry 0======
00:35:14.843 trtype: tcp
00:35:14.843 adrfam: ipv4
00:35:14.843 subtype: current discovery subsystem
00:35:14.843 treq: not specified, sq flow control disable supported
00:35:14.843 portid: 1
00:35:14.843 trsvcid: 4420
00:35:14.843 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:35:14.843 traddr: 10.0.0.1
00:35:14.843 eflags: none
00:35:14.843 sectype: none
00:35:14.843 =====Discovery Log Entry 1======
00:35:14.843 trtype: tcp
00:35:14.843 adrfam: ipv4
00:35:14.843 subtype: nvme subsystem
00:35:14.843 treq: not specified, sq flow control disable supported
00:35:14.843 portid: 1
00:35:14.843 trsvcid: 4420
00:35:14.843 subnqn: nqn.2016-06.io.spdk:testnqn
00:35:14.843 traddr: 10.0.0.1
00:35:14.843 eflags: none
00:35:14.843 sectype: none
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:35:14.843 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:14.844 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:14.844 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:14.844 11:46:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:18.130 Initializing NVMe Controllers
00:35:18.130 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:35:18.130 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:35:18.130 Initialization complete. Launching workers.
00:35:18.130 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92227, failed: 0
00:35:18.130 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 92227, failed to submit 0
00:35:18.130 success 0, unsuccessful 92227, failed 0
00:35:18.130 11:46:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:18.130 11:46:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:21.411 Initializing NVMe Controllers
00:35:21.411 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:35:21.411 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:35:21.411 Initialization complete. Launching workers.
00:35:21.411 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146564, failed: 0
00:35:21.411 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36710, failed to submit 109854
00:35:21.411 success 0, unsuccessful 36710, failed 0
00:35:21.411 11:46:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:21.411 11:46:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:35:24.696 Initializing NVMe Controllers
00:35:24.696 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:35:24.696 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:35:24.696 Initialization complete. Launching workers.
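The kernel-target half of this test (traced above via `configure_kernel_target`) is plain nvmet configfs plumbing: make the subsystem, namespace, and port directories, write the attributes, then link the subsystem into the port. A dry-run sketch of that sequence follows; the attribute names come from the kernel's nvmet configfs interface, `/dev/nvme0n1` and the NQN mirror the log, and actually executing it (`DRY_RUN=0`) would need root plus the `nvmet`/`nvmet_tcp` modules:

```shell
# Dry-run sketch of the nvmet configfs setup traced in this test.
DRY_RUN=${DRY_RUN:-1}
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

# Print each command; only eval it when DRY_RUN=0.
run() { echo "+ $1"; [ "$DRY_RUN" = 1 ] || eval "$1"; }

run "mkdir $subsys"                                         # create the subsystem
run "mkdir $subsys/namespaces/1"                            # namespace 1 under it
run "mkdir $port"                                           # listener port 1
run "echo SPDK-$nqn > $subsys/attr_serial"                  # serial number
run "echo 1 > $subsys/attr_allow_any_host"                  # no host allow-list
run "echo /dev/nvme0n1 > $subsys/namespaces/1/device_path"  # backing block device
run "echo 1 > $subsys/namespaces/1/enable"                  # enable the namespace
run "echo 10.0.0.1 > $port/addr_traddr"                     # listen address
run "echo tcp > $port/addr_trtype"
run "echo 4420 > $port/addr_trsvcid"
run "echo ipv4 > $port/addr_adrfam"
run "ln -s $subsys $port/subsystems/"                       # expose subsystem on the port
```

The teardown traced later (`clean_kernel_target`) is the same sequence in reverse: remove the symlink, `rmdir` the namespace, port, and subsystem directories, then `modprobe -r nvmet_tcp nvmet`.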
00:35:24.696 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138072, failed: 0
00:35:24.696 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34586, failed to submit 103486
00:35:24.696 success 0, unsuccessful 34586, failed 0
00:35:24.696 11:46:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:35:24.696 11:46:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:35:24.696 11:46:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0
00:35:24.696 11:46:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:35:24.696 11:46:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:35:24.696 11:46:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:35:24.696 11:46:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:35:24.696 11:46:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:35:24.696 11:46:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:35:24.696 11:46:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:35:27.234 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:35:27.234 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:35:27.803 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:35:28.062
00:35:28.062 real 0m17.524s
00:35:28.062 user 0m9.210s
00:35:28.062 sys 0m5.047s
00:35:28.062 11:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:28.062 11:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x
00:35:28.062 ************************************
00:35:28.062 END TEST kernel_target_abort
00:35:28.062 ************************************
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:28.062 rmmod nvme_tcp
00:35:28.062 rmmod nvme_fabrics
00:35:28.062 rmmod nvme_keyring
11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2532991 ']'
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2532991
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2532991 ']'
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2532991
00:35:28.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2532991) - No such process
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2532991 is not found'
Process with pid 2532991 is not found
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']'
00:35:28.062 11:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:35:31.355 Waiting for block devices as requested
00:35:31.355 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:35:31.355 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:35:31.355 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:35:31.355 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:35:31.355 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:35:31.355 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:35:31.355 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:35:31.355 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:35:31.614 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:35:31.615 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:35:31.615 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:35:31.615 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:35:31.874 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:35:31.874 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:35:31.874 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:35:32.133 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:35:32.133 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:35:32.133 11:46:45 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:32.133 11:46:45 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:32.133 11:46:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr
00:35:32.133 11:46:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save
00:35:32.133 11:46:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:32.133 11:46:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore
00:35:32.133 11:46:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:32.133 11:46:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:32.133 11:46:45 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:32.133 11:46:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:35:32.133 11:46:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:34.668 11:46:47 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:34.668
00:35:34.668 real 0m48.926s
00:35:34.668 user 1m9.833s
00:35:34.668 sys 0m16.507s
11:46:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:34.668 11:46:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:35:34.668 ************************************
00:35:34.668 END TEST nvmf_abort_qd_sizes
00:35:34.668 ************************************
00:35:34.668 11:46:47 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:35:34.668 11:46:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:35:34.668 11:46:47 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:34.668 11:46:47 -- common/autotest_common.sh@10 -- # set +x
00:35:34.668 ************************************
00:35:34.668 START TEST keyring_file
00:35:34.668 ************************************
00:35:34.668 11:46:48 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:35:34.668 * Looking for test storage...
00:35:34.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring
00:35:34.668 11:46:48 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:35:34.668 11:46:48 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version
00:35:34.668 11:46:48 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:35:34.668 11:46:48 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@336 -- # IFS=.-:
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@336 -- # read -ra ver1
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@337 -- # IFS=.-:
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@337 -- # read -ra ver2
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@338 -- # local 'op=<'
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@340 -- # ver1_l=2
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@341 -- # ver2_l=1
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@344 -- # case "$op" in
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@345 -- # : 1
00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:34.668 11:46:48 keyring_file --
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:34.668 11:46:48 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.669 11:46:48 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.669 11:46:48 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.669 11:46:48 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:34.669 11:46:48 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.669 11:46:48 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:34.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.669 --rc genhtml_branch_coverage=1 00:35:34.669 --rc genhtml_function_coverage=1 00:35:34.669 --rc genhtml_legend=1 00:35:34.669 --rc geninfo_all_blocks=1 00:35:34.669 --rc geninfo_unexecuted_blocks=1 00:35:34.669 00:35:34.669 ' 00:35:34.669 11:46:48 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:34.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.669 --rc genhtml_branch_coverage=1 00:35:34.669 --rc genhtml_function_coverage=1 00:35:34.669 --rc genhtml_legend=1 00:35:34.669 --rc geninfo_all_blocks=1 00:35:34.669 --rc 
geninfo_unexecuted_blocks=1 00:35:34.669 00:35:34.669 ' 00:35:34.669 11:46:48 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:34.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.669 --rc genhtml_branch_coverage=1 00:35:34.669 --rc genhtml_function_coverage=1 00:35:34.669 --rc genhtml_legend=1 00:35:34.669 --rc geninfo_all_blocks=1 00:35:34.669 --rc geninfo_unexecuted_blocks=1 00:35:34.669 00:35:34.669 ' 00:35:34.669 11:46:48 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:34.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.669 --rc genhtml_branch_coverage=1 00:35:34.669 --rc genhtml_function_coverage=1 00:35:34.669 --rc genhtml_legend=1 00:35:34.669 --rc geninfo_all_blocks=1 00:35:34.669 --rc geninfo_unexecuted_blocks=1 00:35:34.669 00:35:34.669 ' 00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.669 11:46:48 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:34.669 11:46:48 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.669 11:46:48 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.669 11:46:48 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.669 11:46:48 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.669 11:46:48 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.669 11:46:48 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.669 11:46:48 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.669 11:46:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:34.669 11:46:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:34.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fBy5VkHPuB 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fBy5VkHPuB 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fBy5VkHPuB 00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.fBy5VkHPuB 00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AXU6dLNwg5 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:34.669 11:46:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AXU6dLNwg5 00:35:34.669 11:46:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AXU6dLNwg5 00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.AXU6dLNwg5 
00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=2541781 00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:34.669 11:46:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2541781 00:35:34.669 11:46:48 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2541781 ']' 00:35:34.669 11:46:48 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.669 11:46:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:34.669 11:46:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.669 11:46:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:34.669 11:46:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:34.669 [2024-11-19 11:46:48.375521] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:35:34.670 [2024-11-19 11:46:48.375575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541781 ] 00:35:34.928 [2024-11-19 11:46:48.452424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.929 [2024-11-19 11:46:48.495442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.929 11:46:48 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.929 11:46:48 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:34.929 11:46:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:34.929 11:46:48 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.929 11:46:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:35.187 [2024-11-19 11:46:48.706693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.187 null0 00:35:35.187 [2024-11-19 11:46:48.738748] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:35.187 [2024-11-19 11:46:48.739062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.187 11:46:48 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:35.187 [2024-11-19 11:46:48.766816] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:35.187 request: 00:35:35.187 { 00:35:35.187 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:35.187 "secure_channel": false, 00:35:35.187 "listen_address": { 00:35:35.187 "trtype": "tcp", 00:35:35.187 "traddr": "127.0.0.1", 00:35:35.187 "trsvcid": "4420" 00:35:35.187 }, 00:35:35.187 "method": "nvmf_subsystem_add_listener", 00:35:35.187 "req_id": 1 00:35:35.187 } 00:35:35.187 Got JSON-RPC error response 00:35:35.187 response: 00:35:35.187 { 00:35:35.187 "code": -32602, 00:35:35.187 "message": "Invalid parameters" 00:35:35.187 } 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:35.187 11:46:48 keyring_file -- keyring/file.sh@47 -- # bperfpid=2541792 00:35:35.187 11:46:48 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2541792 /var/tmp/bperf.sock 00:35:35.187 11:46:48 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:35.187 11:46:48 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2541792 ']' 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:35.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.187 11:46:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:35.187 [2024-11-19 11:46:48.822610] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:35:35.187 [2024-11-19 11:46:48.822656] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541792 ] 00:35:35.187 [2024-11-19 11:46:48.897299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.187 [2024-11-19 11:46:48.939805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.446 11:46:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.446 11:46:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:35.446 11:46:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fBy5VkHPuB 00:35:35.446 11:46:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fBy5VkHPuB 00:35:35.446 11:46:49 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AXU6dLNwg5 00:35:35.704 11:46:49 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AXU6dLNwg5 00:35:35.704 11:46:49 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:35.704 11:46:49 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:35.704 11:46:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:35.704 11:46:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:35.704 11:46:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:35.960 11:46:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.fBy5VkHPuB == \/\t\m\p\/\t\m\p\.\f\B\y\5\V\k\H\P\u\B ]] 00:35:35.960 11:46:49 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:35.960 11:46:49 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:35.960 11:46:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:35.960 11:46:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:35.960 11:46:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.218 11:46:49 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.AXU6dLNwg5 == \/\t\m\p\/\t\m\p\.\A\X\U\6\d\L\N\w\g\5 ]] 00:35:36.218 11:46:49 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:36.218 11:46:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:36.218 11:46:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.218 11:46:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.218 11:46:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:36.218 11:46:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:36.476 11:46:50 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:36.476 11:46:50 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:36.476 11:46:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:36.476 11:46:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.476 11:46:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.476 11:46:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.476 11:46:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:36.476 11:46:50 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:36.476 11:46:50 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:36.476 11:46:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:36.735 [2024-11-19 11:46:50.406374] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:36.735 nvme0n1 00:35:36.735 11:46:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:36.735 11:46:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:36.735 11:46:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.735 11:46:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.735 11:46:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:36.735 11:46:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:36.994 11:46:50 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:36.994 11:46:50 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:36.994 11:46:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:36.994 11:46:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.994 11:46:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.994 11:46:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.994 11:46:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:37.253 11:46:50 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:37.253 11:46:50 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:37.253 Running I/O for 1 seconds... 00:35:38.628 18836.00 IOPS, 73.58 MiB/s 00:35:38.628 Latency(us) 00:35:38.628 [2024-11-19T10:46:52.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.628 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:38.628 nvme0n1 : 1.00 18881.56 73.76 0.00 0.00 6766.59 4302.58 12138.41 00:35:38.628 [2024-11-19T10:46:52.409Z] =================================================================================================================== 00:35:38.628 [2024-11-19T10:46:52.409Z] Total : 18881.56 73.76 0.00 0.00 6766.59 4302.58 12138.41 00:35:38.628 { 00:35:38.628 "results": [ 00:35:38.628 { 00:35:38.628 "job": "nvme0n1", 00:35:38.628 "core_mask": "0x2", 00:35:38.628 "workload": "randrw", 00:35:38.628 "percentage": 50, 00:35:38.628 "status": "finished", 00:35:38.628 "queue_depth": 128, 00:35:38.628 "io_size": 4096, 00:35:38.628 "runtime": 1.004366, 00:35:38.628 "iops": 18881.56309552494, 00:35:38.628 "mibps": 73.7561058418943, 
00:35:38.628 "io_failed": 0, 00:35:38.628 "io_timeout": 0, 00:35:38.628 "avg_latency_us": 6766.5891451996, 00:35:38.628 "min_latency_us": 4302.580869565218, 00:35:38.628 "max_latency_us": 12138.40695652174 00:35:38.628 } 00:35:38.628 ], 00:35:38.628 "core_count": 1 00:35:38.628 } 00:35:38.628 11:46:52 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:38.628 11:46:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:38.628 11:46:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:38.628 11:46:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:38.628 11:46:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:38.628 11:46:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.628 11:46:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:38.628 11:46:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.886 11:46:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:38.886 11:46:52 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:38.886 11:46:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:38.886 11:46:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:38.886 11:46:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:38.886 11:46:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.887 11:46:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.887 11:46:52 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:38.887 11:46:52 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:38.887 11:46:52 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:38.887 11:46:52 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:38.887 11:46:52 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:38.887 11:46:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:38.887 11:46:52 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:38.887 11:46:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:38.887 11:46:52 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:38.887 11:46:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:39.145 [2024-11-19 11:46:52.811328] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:39.145 [2024-11-19 11:46:52.811834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac2d00 (107): Transport endpoint is not connected 00:35:39.145 [2024-11-19 11:46:52.812829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac2d00 (9): Bad file descriptor 00:35:39.145 [2024-11-19 11:46:52.813831] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:39.145 [2024-11-19 11:46:52.813841] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:39.145 [2024-11-19 11:46:52.813848] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:39.145 [2024-11-19 11:46:52.813856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:39.145 request: 00:35:39.145 { 00:35:39.145 "name": "nvme0", 00:35:39.145 "trtype": "tcp", 00:35:39.145 "traddr": "127.0.0.1", 00:35:39.145 "adrfam": "ipv4", 00:35:39.145 "trsvcid": "4420", 00:35:39.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:39.145 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:39.145 "prchk_reftag": false, 00:35:39.145 "prchk_guard": false, 00:35:39.145 "hdgst": false, 00:35:39.145 "ddgst": false, 00:35:39.145 "psk": "key1", 00:35:39.145 "allow_unrecognized_csi": false, 00:35:39.145 "method": "bdev_nvme_attach_controller", 00:35:39.145 "req_id": 1 00:35:39.145 } 00:35:39.145 Got JSON-RPC error response 00:35:39.145 response: 00:35:39.145 { 00:35:39.145 "code": -5, 00:35:39.145 "message": "Input/output error" 00:35:39.145 } 00:35:39.145 11:46:52 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:39.145 11:46:52 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:39.145 11:46:52 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:39.145 11:46:52 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:39.145 11:46:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:39.145 11:46:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.145 11:46:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:39.145 11:46:52 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:39.145 11:46:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.145 11:46:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.403 11:46:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:39.403 11:46:53 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:39.403 11:46:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.403 11:46:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:39.403 11:46:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.403 11:46:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.403 11:46:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:39.661 11:46:53 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:39.661 11:46:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:39.661 11:46:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:39.918 11:46:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:39.918 11:46:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:39.918 11:46:53 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:39.918 11:46:53 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:39.918 11:46:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.176 11:46:53 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:40.176 11:46:53 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.fBy5VkHPuB 00:35:40.176 11:46:53 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.fBy5VkHPuB 00:35:40.176 11:46:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:40.176 11:46:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.fBy5VkHPuB 00:35:40.176 11:46:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:40.176 11:46:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:40.176 11:46:53 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:40.176 11:46:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:40.176 11:46:53 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fBy5VkHPuB 00:35:40.176 11:46:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fBy5VkHPuB 00:35:40.433 [2024-11-19 11:46:54.030035] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fBy5VkHPuB': 0100660 00:35:40.433 [2024-11-19 11:46:54.030063] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:40.433 request: 00:35:40.433 { 00:35:40.433 "name": "key0", 00:35:40.433 "path": "/tmp/tmp.fBy5VkHPuB", 00:35:40.433 "method": "keyring_file_add_key", 00:35:40.433 "req_id": 1 00:35:40.433 } 00:35:40.433 Got JSON-RPC error response 00:35:40.433 response: 00:35:40.433 { 00:35:40.433 "code": -1, 00:35:40.433 "message": "Operation not permitted" 00:35:40.433 } 00:35:40.433 11:46:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:40.433 11:46:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:40.433 11:46:54 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:40.433 11:46:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:40.433 11:46:54 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.fBy5VkHPuB 00:35:40.433 11:46:54 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fBy5VkHPuB 00:35:40.433 11:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fBy5VkHPuB 00:35:40.692 11:46:54 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.fBy5VkHPuB 00:35:40.692 11:46:54 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:40.692 11:46:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:40.692 11:46:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:40.692 11:46:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:40.692 11:46:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:40.692 11:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.692 11:46:54 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:40.692 11:46:54 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.692 11:46:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:40.692 11:46:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.692 11:46:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:40.692 11:46:54 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:40.692 11:46:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:40.692 11:46:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:40.692 11:46:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.692 11:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.951 [2024-11-19 11:46:54.623621] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.fBy5VkHPuB': No such file or directory 00:35:40.951 [2024-11-19 11:46:54.623650] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:40.951 [2024-11-19 11:46:54.623666] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:40.951 [2024-11-19 11:46:54.623674] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:40.951 [2024-11-19 11:46:54.623681] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:40.951 [2024-11-19 11:46:54.623688] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:40.951 request: 00:35:40.951 { 00:35:40.951 "name": "nvme0", 00:35:40.951 "trtype": "tcp", 00:35:40.951 "traddr": "127.0.0.1", 00:35:40.951 "adrfam": "ipv4", 00:35:40.951 "trsvcid": "4420", 00:35:40.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.951 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:40.951 "prchk_reftag": false, 00:35:40.951 "prchk_guard": false, 00:35:40.951 "hdgst": false, 00:35:40.951 "ddgst": false, 00:35:40.951 "psk": "key0", 00:35:40.951 "allow_unrecognized_csi": false, 00:35:40.951 "method": "bdev_nvme_attach_controller", 00:35:40.951 "req_id": 1 00:35:40.951 } 00:35:40.951 Got JSON-RPC error response 00:35:40.951 response: 00:35:40.951 { 00:35:40.951 "code": -19, 00:35:40.951 "message": "No such device" 00:35:40.951 } 00:35:40.951 11:46:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:40.951 11:46:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:40.951 11:46:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:40.951 11:46:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:40.951 11:46:54 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:40.951 11:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:41.234 11:46:54 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:41.234 11:46:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:41.234 11:46:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:41.234 11:46:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:41.234 11:46:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:41.234 11:46:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:41.234 11:46:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QfNwU4KPMH 00:35:41.234 11:46:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:41.234 11:46:54 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:41.234 11:46:54 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:41.234 11:46:54 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:41.234 11:46:54 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:41.234 11:46:54 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:41.234 11:46:54 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:41.234 11:46:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QfNwU4KPMH 00:35:41.234 11:46:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QfNwU4KPMH 00:35:41.234 11:46:54 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.QfNwU4KPMH 00:35:41.234 11:46:54 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QfNwU4KPMH 00:35:41.234 11:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QfNwU4KPMH 00:35:41.534 11:46:55 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:41.534 11:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:41.793 nvme0n1 00:35:41.793 11:46:55 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:41.793 11:46:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:41.793 11:46:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.793 11:46:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.793 11:46:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:41.793 11:46:55 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.793 11:46:55 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:41.793 11:46:55 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:41.793 11:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:42.052 11:46:55 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:42.052 11:46:55 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:42.052 11:46:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.052 11:46:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:42.052 11:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.312 11:46:55 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:42.312 11:46:55 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:42.312 11:46:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:42.312 11:46:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:42.312 11:46:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.312 11:46:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:42.312 11:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.571 11:46:56 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:42.571 11:46:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:42.571 11:46:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:35:42.830 11:46:56 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:42.830 11:46:56 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:42.830 11:46:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.830 11:46:56 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:42.830 11:46:56 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QfNwU4KPMH 00:35:42.830 11:46:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QfNwU4KPMH 00:35:43.089 11:46:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AXU6dLNwg5 00:35:43.089 11:46:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AXU6dLNwg5 00:35:43.348 11:46:56 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:43.348 11:46:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:43.607 nvme0n1 00:35:43.607 11:46:57 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:43.608 11:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:43.867 11:46:57 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:43.867 "subsystems": [ 00:35:43.867 { 00:35:43.867 "subsystem": 
"keyring", 00:35:43.867 "config": [ 00:35:43.867 { 00:35:43.867 "method": "keyring_file_add_key", 00:35:43.867 "params": { 00:35:43.867 "name": "key0", 00:35:43.867 "path": "/tmp/tmp.QfNwU4KPMH" 00:35:43.867 } 00:35:43.867 }, 00:35:43.867 { 00:35:43.867 "method": "keyring_file_add_key", 00:35:43.867 "params": { 00:35:43.867 "name": "key1", 00:35:43.867 "path": "/tmp/tmp.AXU6dLNwg5" 00:35:43.867 } 00:35:43.867 } 00:35:43.867 ] 00:35:43.867 }, 00:35:43.867 { 00:35:43.867 "subsystem": "iobuf", 00:35:43.867 "config": [ 00:35:43.867 { 00:35:43.867 "method": "iobuf_set_options", 00:35:43.867 "params": { 00:35:43.867 "small_pool_count": 8192, 00:35:43.867 "large_pool_count": 1024, 00:35:43.867 "small_bufsize": 8192, 00:35:43.867 "large_bufsize": 135168, 00:35:43.867 "enable_numa": false 00:35:43.867 } 00:35:43.867 } 00:35:43.867 ] 00:35:43.867 }, 00:35:43.867 { 00:35:43.867 "subsystem": "sock", 00:35:43.867 "config": [ 00:35:43.867 { 00:35:43.867 "method": "sock_set_default_impl", 00:35:43.867 "params": { 00:35:43.867 "impl_name": "posix" 00:35:43.867 } 00:35:43.867 }, 00:35:43.867 { 00:35:43.867 "method": "sock_impl_set_options", 00:35:43.867 "params": { 00:35:43.867 "impl_name": "ssl", 00:35:43.867 "recv_buf_size": 4096, 00:35:43.867 "send_buf_size": 4096, 00:35:43.867 "enable_recv_pipe": true, 00:35:43.867 "enable_quickack": false, 00:35:43.867 "enable_placement_id": 0, 00:35:43.867 "enable_zerocopy_send_server": true, 00:35:43.867 "enable_zerocopy_send_client": false, 00:35:43.867 "zerocopy_threshold": 0, 00:35:43.867 "tls_version": 0, 00:35:43.867 "enable_ktls": false 00:35:43.867 } 00:35:43.867 }, 00:35:43.867 { 00:35:43.867 "method": "sock_impl_set_options", 00:35:43.867 "params": { 00:35:43.867 "impl_name": "posix", 00:35:43.867 "recv_buf_size": 2097152, 00:35:43.867 "send_buf_size": 2097152, 00:35:43.867 "enable_recv_pipe": true, 00:35:43.867 "enable_quickack": false, 00:35:43.867 "enable_placement_id": 0, 00:35:43.867 "enable_zerocopy_send_server": true, 
00:35:43.867 "enable_zerocopy_send_client": false, 00:35:43.867 "zerocopy_threshold": 0, 00:35:43.867 "tls_version": 0, 00:35:43.867 "enable_ktls": false 00:35:43.867 } 00:35:43.867 } 00:35:43.867 ] 00:35:43.867 }, 00:35:43.867 { 00:35:43.867 "subsystem": "vmd", 00:35:43.867 "config": [] 00:35:43.867 }, 00:35:43.867 { 00:35:43.867 "subsystem": "accel", 00:35:43.867 "config": [ 00:35:43.867 { 00:35:43.867 "method": "accel_set_options", 00:35:43.867 "params": { 00:35:43.867 "small_cache_size": 128, 00:35:43.867 "large_cache_size": 16, 00:35:43.867 "task_count": 2048, 00:35:43.867 "sequence_count": 2048, 00:35:43.867 "buf_count": 2048 00:35:43.867 } 00:35:43.867 } 00:35:43.867 ] 00:35:43.867 }, 00:35:43.867 { 00:35:43.867 "subsystem": "bdev", 00:35:43.867 "config": [ 00:35:43.867 { 00:35:43.867 "method": "bdev_set_options", 00:35:43.867 "params": { 00:35:43.867 "bdev_io_pool_size": 65535, 00:35:43.867 "bdev_io_cache_size": 256, 00:35:43.867 "bdev_auto_examine": true, 00:35:43.868 "iobuf_small_cache_size": 128, 00:35:43.868 "iobuf_large_cache_size": 16 00:35:43.868 } 00:35:43.868 }, 00:35:43.868 { 00:35:43.868 "method": "bdev_raid_set_options", 00:35:43.868 "params": { 00:35:43.868 "process_window_size_kb": 1024, 00:35:43.868 "process_max_bandwidth_mb_sec": 0 00:35:43.868 } 00:35:43.868 }, 00:35:43.868 { 00:35:43.868 "method": "bdev_iscsi_set_options", 00:35:43.868 "params": { 00:35:43.868 "timeout_sec": 30 00:35:43.868 } 00:35:43.868 }, 00:35:43.868 { 00:35:43.868 "method": "bdev_nvme_set_options", 00:35:43.868 "params": { 00:35:43.868 "action_on_timeout": "none", 00:35:43.868 "timeout_us": 0, 00:35:43.868 "timeout_admin_us": 0, 00:35:43.868 "keep_alive_timeout_ms": 10000, 00:35:43.868 "arbitration_burst": 0, 00:35:43.868 "low_priority_weight": 0, 00:35:43.868 "medium_priority_weight": 0, 00:35:43.868 "high_priority_weight": 0, 00:35:43.868 "nvme_adminq_poll_period_us": 10000, 00:35:43.868 "nvme_ioq_poll_period_us": 0, 00:35:43.868 "io_queue_requests": 512, 
00:35:43.868 "delay_cmd_submit": true, 00:35:43.868 "transport_retry_count": 4, 00:35:43.868 "bdev_retry_count": 3, 00:35:43.868 "transport_ack_timeout": 0, 00:35:43.868 "ctrlr_loss_timeout_sec": 0, 00:35:43.868 "reconnect_delay_sec": 0, 00:35:43.868 "fast_io_fail_timeout_sec": 0, 00:35:43.868 "disable_auto_failback": false, 00:35:43.868 "generate_uuids": false, 00:35:43.868 "transport_tos": 0, 00:35:43.868 "nvme_error_stat": false, 00:35:43.868 "rdma_srq_size": 0, 00:35:43.868 "io_path_stat": false, 00:35:43.868 "allow_accel_sequence": false, 00:35:43.868 "rdma_max_cq_size": 0, 00:35:43.868 "rdma_cm_event_timeout_ms": 0, 00:35:43.868 "dhchap_digests": [ 00:35:43.868 "sha256", 00:35:43.868 "sha384", 00:35:43.868 "sha512" 00:35:43.868 ], 00:35:43.868 "dhchap_dhgroups": [ 00:35:43.868 "null", 00:35:43.868 "ffdhe2048", 00:35:43.868 "ffdhe3072", 00:35:43.868 "ffdhe4096", 00:35:43.868 "ffdhe6144", 00:35:43.868 "ffdhe8192" 00:35:43.868 ] 00:35:43.868 } 00:35:43.868 }, 00:35:43.868 { 00:35:43.868 "method": "bdev_nvme_attach_controller", 00:35:43.868 "params": { 00:35:43.868 "name": "nvme0", 00:35:43.868 "trtype": "TCP", 00:35:43.868 "adrfam": "IPv4", 00:35:43.868 "traddr": "127.0.0.1", 00:35:43.868 "trsvcid": "4420", 00:35:43.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:43.868 "prchk_reftag": false, 00:35:43.868 "prchk_guard": false, 00:35:43.868 "ctrlr_loss_timeout_sec": 0, 00:35:43.868 "reconnect_delay_sec": 0, 00:35:43.868 "fast_io_fail_timeout_sec": 0, 00:35:43.868 "psk": "key0", 00:35:43.868 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:43.868 "hdgst": false, 00:35:43.868 "ddgst": false, 00:35:43.868 "multipath": "multipath" 00:35:43.868 } 00:35:43.868 }, 00:35:43.868 { 00:35:43.868 "method": "bdev_nvme_set_hotplug", 00:35:43.868 "params": { 00:35:43.868 "period_us": 100000, 00:35:43.868 "enable": false 00:35:43.868 } 00:35:43.868 }, 00:35:43.868 { 00:35:43.868 "method": "bdev_wait_for_examine" 00:35:43.868 } 00:35:43.868 ] 00:35:43.868 }, 00:35:43.868 { 
00:35:43.868 "subsystem": "nbd", 00:35:43.868 "config": [] 00:35:43.868 } 00:35:43.868 ] 00:35:43.868 }' 00:35:43.868 11:46:57 keyring_file -- keyring/file.sh@115 -- # killprocess 2541792 00:35:43.868 11:46:57 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2541792 ']' 00:35:43.868 11:46:57 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2541792 00:35:43.868 11:46:57 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:43.868 11:46:57 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:43.868 11:46:57 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541792 00:35:43.868 11:46:57 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:43.868 11:46:57 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:43.868 11:46:57 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541792' 00:35:43.868 killing process with pid 2541792 00:35:43.868 11:46:57 keyring_file -- common/autotest_common.sh@973 -- # kill 2541792 00:35:43.868 Received shutdown signal, test time was about 1.000000 seconds 00:35:43.868 00:35:43.868 Latency(us) 00:35:43.868 [2024-11-19T10:46:57.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.868 [2024-11-19T10:46:57.649Z] =================================================================================================================== 00:35:43.868 [2024-11-19T10:46:57.649Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:43.868 11:46:57 keyring_file -- common/autotest_common.sh@978 -- # wait 2541792 00:35:44.128 11:46:57 keyring_file -- keyring/file.sh@118 -- # bperfpid=2543328 00:35:44.128 11:46:57 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2543328 /var/tmp/bperf.sock 00:35:44.128 11:46:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2543328 ']' 00:35:44.128 11:46:57 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:44.128 11:46:57 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:44.128 11:46:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.128 11:46:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:44.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:44.128 11:46:57 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:44.128 "subsystems": [ 00:35:44.128 { 00:35:44.128 "subsystem": "keyring", 00:35:44.128 "config": [ 00:35:44.128 { 00:35:44.128 "method": "keyring_file_add_key", 00:35:44.128 "params": { 00:35:44.128 "name": "key0", 00:35:44.128 "path": "/tmp/tmp.QfNwU4KPMH" 00:35:44.128 } 00:35:44.128 }, 00:35:44.128 { 00:35:44.128 "method": "keyring_file_add_key", 00:35:44.128 "params": { 00:35:44.128 "name": "key1", 00:35:44.128 "path": "/tmp/tmp.AXU6dLNwg5" 00:35:44.128 } 00:35:44.128 } 00:35:44.128 ] 00:35:44.128 }, 00:35:44.128 { 00:35:44.128 "subsystem": "iobuf", 00:35:44.128 "config": [ 00:35:44.128 { 00:35:44.128 "method": "iobuf_set_options", 00:35:44.128 "params": { 00:35:44.128 "small_pool_count": 8192, 00:35:44.128 "large_pool_count": 1024, 00:35:44.128 "small_bufsize": 8192, 00:35:44.128 "large_bufsize": 135168, 00:35:44.128 "enable_numa": false 00:35:44.128 } 00:35:44.128 } 00:35:44.128 ] 00:35:44.128 }, 00:35:44.128 { 00:35:44.128 "subsystem": "sock", 00:35:44.128 "config": [ 00:35:44.128 { 00:35:44.128 "method": "sock_set_default_impl", 00:35:44.128 "params": { 00:35:44.128 "impl_name": "posix" 00:35:44.128 } 00:35:44.128 }, 00:35:44.128 { 00:35:44.128 "method": "sock_impl_set_options", 00:35:44.128 "params": { 00:35:44.128 "impl_name": "ssl", 00:35:44.128 "recv_buf_size": 4096, 00:35:44.128 
"send_buf_size": 4096, 00:35:44.128 "enable_recv_pipe": true, 00:35:44.128 "enable_quickack": false, 00:35:44.128 "enable_placement_id": 0, 00:35:44.128 "enable_zerocopy_send_server": true, 00:35:44.128 "enable_zerocopy_send_client": false, 00:35:44.128 "zerocopy_threshold": 0, 00:35:44.128 "tls_version": 0, 00:35:44.128 "enable_ktls": false 00:35:44.128 } 00:35:44.128 }, 00:35:44.128 { 00:35:44.128 "method": "sock_impl_set_options", 00:35:44.128 "params": { 00:35:44.128 "impl_name": "posix", 00:35:44.128 "recv_buf_size": 2097152, 00:35:44.128 "send_buf_size": 2097152, 00:35:44.128 "enable_recv_pipe": true, 00:35:44.128 "enable_quickack": false, 00:35:44.128 "enable_placement_id": 0, 00:35:44.128 "enable_zerocopy_send_server": true, 00:35:44.128 "enable_zerocopy_send_client": false, 00:35:44.128 "zerocopy_threshold": 0, 00:35:44.128 "tls_version": 0, 00:35:44.128 "enable_ktls": false 00:35:44.128 } 00:35:44.128 } 00:35:44.128 ] 00:35:44.128 }, 00:35:44.128 { 00:35:44.128 "subsystem": "vmd", 00:35:44.128 "config": [] 00:35:44.128 }, 00:35:44.128 { 00:35:44.128 "subsystem": "accel", 00:35:44.128 "config": [ 00:35:44.128 { 00:35:44.128 "method": "accel_set_options", 00:35:44.128 "params": { 00:35:44.128 "small_cache_size": 128, 00:35:44.129 "large_cache_size": 16, 00:35:44.129 "task_count": 2048, 00:35:44.129 "sequence_count": 2048, 00:35:44.129 "buf_count": 2048 00:35:44.129 } 00:35:44.129 } 00:35:44.129 ] 00:35:44.129 }, 00:35:44.129 { 00:35:44.129 "subsystem": "bdev", 00:35:44.129 "config": [ 00:35:44.129 { 00:35:44.129 "method": "bdev_set_options", 00:35:44.129 "params": { 00:35:44.129 "bdev_io_pool_size": 65535, 00:35:44.129 "bdev_io_cache_size": 256, 00:35:44.129 "bdev_auto_examine": true, 00:35:44.129 "iobuf_small_cache_size": 128, 00:35:44.129 "iobuf_large_cache_size": 16 00:35:44.129 } 00:35:44.129 }, 00:35:44.129 { 00:35:44.129 "method": "bdev_raid_set_options", 00:35:44.129 "params": { 00:35:44.129 "process_window_size_kb": 1024, 00:35:44.129 
"process_max_bandwidth_mb_sec": 0 00:35:44.129 } 00:35:44.129 }, 00:35:44.129 { 00:35:44.129 "method": "bdev_iscsi_set_options", 00:35:44.129 "params": { 00:35:44.129 "timeout_sec": 30 00:35:44.129 } 00:35:44.129 }, 00:35:44.129 { 00:35:44.129 "method": "bdev_nvme_set_options", 00:35:44.129 "params": { 00:35:44.129 "action_on_timeout": "none", 00:35:44.129 "timeout_us": 0, 00:35:44.129 "timeout_admin_us": 0, 00:35:44.129 "keep_alive_timeout_ms": 10000, 00:35:44.129 "arbitration_burst": 0, 00:35:44.129 "low_priority_weight": 0, 00:35:44.129 "medium_priority_weight": 0, 00:35:44.129 "high_priority_weight": 0, 00:35:44.129 "nvme_adminq_poll_period_us": 10000, 00:35:44.129 "nvme_ioq_poll_period_us": 0, 00:35:44.129 "io_queue_requests": 512, 00:35:44.129 "delay_cmd_submit": true, 00:35:44.129 "transport_retry_count": 4, 00:35:44.129 "bdev_retry_count": 3, 00:35:44.129 "transport_ack_timeout": 0, 00:35:44.129 "ctrlr_loss_timeout_sec": 0, 00:35:44.129 "reconnect_delay_sec": 0, 00:35:44.129 "fast_io_fail_timeout_sec": 0, 00:35:44.129 "disable_auto_failback": false, 00:35:44.129 "generate_uuids": false, 00:35:44.129 "transport_tos": 0, 00:35:44.129 "nvme_error_stat": false, 00:35:44.129 "rdma_srq_size": 0, 00:35:44.129 "io_path_stat": false, 00:35:44.129 "allow_accel_sequence": false, 00:35:44.129 "rdma_max_cq_size": 0, 00:35:44.129 "rdma_cm_event_timeout_ms": 0, 00:35:44.129 "dhchap_digests": [ 00:35:44.129 "sha256", 00:35:44.129 "sha384", 00:35:44.129 "sha512" 00:35:44.129 ], 00:35:44.129 "dhchap_dhgroups": [ 00:35:44.129 "null", 00:35:44.129 "ffdhe2048", 00:35:44.129 "ffdhe3072", 00:35:44.129 "ffdhe4096", 00:35:44.129 "ffdhe6144", 00:35:44.129 "ffdhe8192" 00:35:44.129 ] 00:35:44.129 } 00:35:44.129 }, 00:35:44.129 { 00:35:44.129 "method": "bdev_nvme_attach_controller", 00:35:44.129 "params": { 00:35:44.129 "name": "nvme0", 00:35:44.129 "trtype": "TCP", 00:35:44.129 "adrfam": "IPv4", 00:35:44.129 "traddr": "127.0.0.1", 00:35:44.129 "trsvcid": "4420", 00:35:44.129 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:44.129 "prchk_reftag": false, 00:35:44.129 "prchk_guard": false, 00:35:44.129 "ctrlr_loss_timeout_sec": 0, 00:35:44.129 "reconnect_delay_sec": 0, 00:35:44.129 "fast_io_fail_timeout_sec": 0, 00:35:44.129 "psk": "key0", 00:35:44.129 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:44.129 "hdgst": false, 00:35:44.129 "ddgst": false, 00:35:44.129 "multipath": "multipath" 00:35:44.129 } 00:35:44.129 }, 00:35:44.129 { 00:35:44.129 "method": "bdev_nvme_set_hotplug", 00:35:44.129 "params": { 00:35:44.129 "period_us": 100000, 00:35:44.129 "enable": false 00:35:44.129 } 00:35:44.129 }, 00:35:44.129 { 00:35:44.129 "method": "bdev_wait_for_examine" 00:35:44.129 } 00:35:44.129 ] 00:35:44.129 }, 00:35:44.129 { 00:35:44.129 "subsystem": "nbd", 00:35:44.129 "config": [] 00:35:44.129 } 00:35:44.129 ] 00:35:44.129 }' 00:35:44.129 11:46:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.129 11:46:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:44.129 [2024-11-19 11:46:57.710860] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:35:44.129 [2024-11-19 11:46:57.710910] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543328 ] 00:35:44.129 [2024-11-19 11:46:57.787472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.129 [2024-11-19 11:46:57.825299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.389 [2024-11-19 11:46:57.986848] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:44.957 11:46:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.957 11:46:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:44.957 11:46:58 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:44.957 11:46:58 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:44.957 11:46:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.216 11:46:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:45.216 11:46:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:45.216 11:46:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.216 11:46:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:45.216 11:46:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.216 11:46:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.216 11:46:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:45.216 11:46:58 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:45.216 11:46:58 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:45.216 11:46:58 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:45.216 11:46:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.216 11:46:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.216 11:46:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.216 11:46:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:45.477 11:46:59 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:45.477 11:46:59 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:45.477 11:46:59 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:45.477 11:46:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:45.746 11:46:59 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:45.746 11:46:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:45.746 11:46:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.QfNwU4KPMH /tmp/tmp.AXU6dLNwg5 00:35:45.746 11:46:59 keyring_file -- keyring/file.sh@20 -- # killprocess 2543328 00:35:45.746 11:46:59 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2543328 ']' 00:35:45.746 11:46:59 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2543328 00:35:45.746 11:46:59 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:45.746 11:46:59 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:45.746 11:46:59 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2543328 00:35:45.746 11:46:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:45.746 11:46:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:45.746 11:46:59 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2543328' 00:35:45.746 killing process with pid 2543328 00:35:45.746 11:46:59 keyring_file -- common/autotest_common.sh@973 -- # kill 2543328 00:35:45.746 Received shutdown signal, test time was about 1.000000 seconds 00:35:45.746 00:35:45.746 Latency(us) 00:35:45.746 [2024-11-19T10:46:59.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:45.746 [2024-11-19T10:46:59.527Z] =================================================================================================================== 00:35:45.746 [2024-11-19T10:46:59.527Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:45.746 11:46:59 keyring_file -- common/autotest_common.sh@978 -- # wait 2543328 00:35:46.005 11:46:59 keyring_file -- keyring/file.sh@21 -- # killprocess 2541781 00:35:46.005 11:46:59 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2541781 ']' 00:35:46.005 11:46:59 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2541781 00:35:46.005 11:46:59 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:46.005 11:46:59 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.005 11:46:59 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541781 00:35:46.005 11:46:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:46.005 11:46:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:46.005 11:46:59 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541781' 00:35:46.005 killing process with pid 2541781 00:35:46.005 11:46:59 keyring_file -- common/autotest_common.sh@973 -- # kill 2541781 00:35:46.005 11:46:59 keyring_file -- common/autotest_common.sh@978 -- # wait 2541781 00:35:46.265 00:35:46.265 real 0m11.917s 00:35:46.265 user 0m29.689s 00:35:46.265 sys 0m2.690s 00:35:46.265 11:46:59 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:46.265 11:46:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:46.265 ************************************ 00:35:46.265 END TEST keyring_file 00:35:46.265 ************************************ 00:35:46.265 11:46:59 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:46.265 11:46:59 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:46.265 11:46:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:46.265 11:46:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:46.265 11:46:59 -- common/autotest_common.sh@10 -- # set +x 00:35:46.265 ************************************ 00:35:46.265 START TEST keyring_linux 00:35:46.265 ************************************ 00:35:46.265 11:46:59 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:46.265 Joined session keyring: 61227542 00:35:46.526 * Looking for test storage... 
00:35:46.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:46.526 11:47:00 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:46.526 11:47:00 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:46.526 11:47:00 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:46.526 11:47:00 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:46.526 11:47:00 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:46.527 11:47:00 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:46.527 11:47:00 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:46.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.527 --rc genhtml_branch_coverage=1 00:35:46.527 --rc genhtml_function_coverage=1 00:35:46.527 --rc genhtml_legend=1 00:35:46.527 --rc geninfo_all_blocks=1 00:35:46.527 --rc geninfo_unexecuted_blocks=1 00:35:46.527 00:35:46.527 ' 00:35:46.527 11:47:00 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:46.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.527 --rc genhtml_branch_coverage=1 00:35:46.527 --rc genhtml_function_coverage=1 00:35:46.527 --rc genhtml_legend=1 00:35:46.527 --rc geninfo_all_blocks=1 00:35:46.527 --rc geninfo_unexecuted_blocks=1 00:35:46.527 00:35:46.527 ' 
00:35:46.527 11:47:00 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:46.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.527 --rc genhtml_branch_coverage=1 00:35:46.527 --rc genhtml_function_coverage=1 00:35:46.527 --rc genhtml_legend=1 00:35:46.527 --rc geninfo_all_blocks=1 00:35:46.527 --rc geninfo_unexecuted_blocks=1 00:35:46.527 00:35:46.527 ' 00:35:46.527 11:47:00 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:46.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.527 --rc genhtml_branch_coverage=1 00:35:46.527 --rc genhtml_function_coverage=1 00:35:46.527 --rc genhtml_legend=1 00:35:46.527 --rc geninfo_all_blocks=1 00:35:46.527 --rc geninfo_unexecuted_blocks=1 00:35:46.527 00:35:46.527 ' 00:35:46.527 11:47:00 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:46.527 11:47:00 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:46.527 11:47:00 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.527 11:47:00 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.527 11:47:00 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.527 11:47:00 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:46.527 11:47:00 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:46.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:46.527 11:47:00 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:46.527 11:47:00 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:46.527 11:47:00 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:46.527 11:47:00 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:46.527 11:47:00 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:46.527 11:47:00 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:46.527 11:47:00 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:46.527 /tmp/:spdk-test:key0 00:35:46.527 11:47:00 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:46.527 11:47:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:46.528 11:47:00 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:46.528 11:47:00 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:46.528 11:47:00 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:46.528 11:47:00 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:46.528 11:47:00 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:46.528 11:47:00 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:46.528 11:47:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:46.528 11:47:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:46.528 /tmp/:spdk-test:key1 00:35:46.528 11:47:00 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2543877 00:35:46.528 11:47:00 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:46.528 11:47:00 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2543877 00:35:46.528 11:47:00 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2543877 ']' 00:35:46.528 11:47:00 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.528 11:47:00 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.528 11:47:00 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.528 11:47:00 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.528 11:47:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:46.788 [2024-11-19 11:47:00.349055] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:35:46.788 [2024-11-19 11:47:00.349109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543877 ] 00:35:46.788 [2024-11-19 11:47:00.427196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.788 [2024-11-19 11:47:00.470821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.046 11:47:00 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.046 11:47:00 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:47.046 11:47:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:47.046 11:47:00 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.046 11:47:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:47.046 [2024-11-19 11:47:00.695317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.046 null0 00:35:47.046 [2024-11-19 11:47:00.727370] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:47.046 [2024-11-19 11:47:00.727719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:47.046 11:47:00 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.046 11:47:00 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:47.046 506127899 00:35:47.046 11:47:00 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:47.046 640710367 00:35:47.046 11:47:00 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2543962 00:35:47.046 11:47:00 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:47.046 11:47:00 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2543962 /var/tmp/bperf.sock 00:35:47.046 11:47:00 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2543962 ']' 00:35:47.046 11:47:00 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:47.046 11:47:00 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:47.047 11:47:00 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:47.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:47.047 11:47:00 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:47.047 11:47:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:47.047 [2024-11-19 11:47:00.798850] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:35:47.047 [2024-11-19 11:47:00.798893] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543962 ] 00:35:47.305 [2024-11-19 11:47:00.859009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.305 [2024-11-19 11:47:00.900233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.305 11:47:00 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.305 11:47:00 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:47.305 11:47:00 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:47.305 11:47:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:47.564 11:47:01 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:47.564 11:47:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:47.823 11:47:01 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:47.823 11:47:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:47.823 [2024-11-19 11:47:01.584360] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:48.082 nvme0n1 00:35:48.082 11:47:01 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:48.082 11:47:01 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:48.082 11:47:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:48.082 11:47:01 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:48.082 11:47:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:48.082 11:47:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.341 11:47:01 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:48.341 11:47:01 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:48.341 11:47:01 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:48.341 11:47:01 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:48.341 11:47:01 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:48.341 11:47:01 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:48.341 11:47:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.341 11:47:02 keyring_linux -- keyring/linux.sh@25 -- # sn=506127899 00:35:48.341 11:47:02 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:48.341 11:47:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:48.341 11:47:02 keyring_linux -- keyring/linux.sh@26 -- # [[ 506127899 == \5\0\6\1\2\7\8\9\9 ]] 00:35:48.341 11:47:02 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 506127899 00:35:48.341 11:47:02 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:48.341 11:47:02 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:48.600 Running I/O for 1 seconds... 00:35:49.535 21372.00 IOPS, 83.48 MiB/s 00:35:49.535 Latency(us) 00:35:49.535 [2024-11-19T10:47:03.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.535 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:49.535 nvme0n1 : 1.01 21372.75 83.49 0.00 0.00 5969.18 5071.92 12081.42 00:35:49.535 [2024-11-19T10:47:03.316Z] =================================================================================================================== 00:35:49.535 [2024-11-19T10:47:03.316Z] Total : 21372.75 83.49 0.00 0.00 5969.18 5071.92 12081.42 00:35:49.535 { 00:35:49.535 "results": [ 00:35:49.535 { 00:35:49.535 "job": "nvme0n1", 00:35:49.535 "core_mask": "0x2", 00:35:49.535 "workload": "randread", 00:35:49.535 "status": "finished", 00:35:49.535 "queue_depth": 128, 00:35:49.535 "io_size": 4096, 00:35:49.535 "runtime": 1.005954, 00:35:49.535 "iops": 21372.74666634856, 00:35:49.535 "mibps": 83.48729166542407, 00:35:49.535 "io_failed": 0, 00:35:49.535 "io_timeout": 0, 00:35:49.535 "avg_latency_us": 5969.1792410920125, 00:35:49.535 "min_latency_us": 5071.91652173913, 00:35:49.535 "max_latency_us": 12081.419130434782 00:35:49.535 } 00:35:49.535 ], 00:35:49.535 "core_count": 1 00:35:49.535 } 00:35:49.535 11:47:03 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:49.535 11:47:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:49.793 11:47:03 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:49.793 11:47:03 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:49.793 11:47:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:49.793 11:47:03 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:49.793 11:47:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:49.793 11:47:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.051 11:47:03 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:50.051 11:47:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:50.051 11:47:03 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:50.051 11:47:03 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:50.051 11:47:03 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:50.051 11:47:03 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:50.051 11:47:03 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:50.051 11:47:03 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.051 11:47:03 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:50.051 11:47:03 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.051 11:47:03 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:50.051 11:47:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:50.052 [2024-11-19 11:47:03.776483] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:50.052 [2024-11-19 11:47:03.776697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0da70 (107): Transport endpoint is not connected 00:35:50.052 [2024-11-19 11:47:03.777691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0da70 (9): Bad file descriptor 00:35:50.052 [2024-11-19 11:47:03.778692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:50.052 [2024-11-19 11:47:03.778702] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:50.052 [2024-11-19 11:47:03.778709] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:50.052 [2024-11-19 11:47:03.778718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:50.052 request: 00:35:50.052 { 00:35:50.052 "name": "nvme0", 00:35:50.052 "trtype": "tcp", 00:35:50.052 "traddr": "127.0.0.1", 00:35:50.052 "adrfam": "ipv4", 00:35:50.052 "trsvcid": "4420", 00:35:50.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:50.052 "prchk_reftag": false, 00:35:50.052 "prchk_guard": false, 00:35:50.052 "hdgst": false, 00:35:50.052 "ddgst": false, 00:35:50.052 "psk": ":spdk-test:key1", 00:35:50.052 "allow_unrecognized_csi": false, 00:35:50.052 "method": "bdev_nvme_attach_controller", 00:35:50.052 "req_id": 1 00:35:50.052 } 00:35:50.052 Got JSON-RPC error response 00:35:50.052 response: 00:35:50.052 { 00:35:50.052 "code": -5, 00:35:50.052 "message": "Input/output error" 00:35:50.052 } 00:35:50.052 11:47:03 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:50.052 11:47:03 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:50.052 11:47:03 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:50.052 11:47:03 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@33 -- # sn=506127899 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 506127899 00:35:50.052 1 links removed 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:50.052 
11:47:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@33 -- # sn=640710367 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 640710367 00:35:50.052 1 links removed 00:35:50.052 11:47:03 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2543962 00:35:50.052 11:47:03 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2543962 ']' 00:35:50.052 11:47:03 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2543962 00:35:50.052 11:47:03 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:50.052 11:47:03 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.052 11:47:03 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2543962 00:35:50.323 11:47:03 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:50.323 11:47:03 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:50.323 11:47:03 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2543962' 00:35:50.323 killing process with pid 2543962 00:35:50.323 11:47:03 keyring_linux -- common/autotest_common.sh@973 -- # kill 2543962 00:35:50.323 Received shutdown signal, test time was about 1.000000 seconds 00:35:50.323 00:35:50.323 Latency(us) 00:35:50.323 [2024-11-19T10:47:04.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:50.323 [2024-11-19T10:47:04.104Z] =================================================================================================================== 00:35:50.323 [2024-11-19T10:47:04.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:50.323 11:47:03 keyring_linux -- common/autotest_common.sh@978 -- # wait 2543962 
00:35:50.323 11:47:04 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2543877 00:35:50.323 11:47:04 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2543877 ']' 00:35:50.323 11:47:04 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2543877 00:35:50.323 11:47:04 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:50.323 11:47:04 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.323 11:47:04 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2543877 00:35:50.323 11:47:04 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:50.323 11:47:04 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:50.323 11:47:04 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2543877' 00:35:50.323 killing process with pid 2543877 00:35:50.323 11:47:04 keyring_linux -- common/autotest_common.sh@973 -- # kill 2543877 00:35:50.323 11:47:04 keyring_linux -- common/autotest_common.sh@978 -- # wait 2543877 00:35:50.891 00:35:50.891 real 0m4.381s 00:35:50.891 user 0m8.237s 00:35:50.891 sys 0m1.480s 00:35:50.891 11:47:04 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:50.891 11:47:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:50.891 ************************************ 00:35:50.891 END TEST keyring_linux 00:35:50.891 ************************************ 00:35:50.891 11:47:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:50.891 11:47:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:50.891 11:47:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:50.891 11:47:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:50.891 11:47:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:50.891 11:47:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:50.891 11:47:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:50.891 11:47:04 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:35:50.891 11:47:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:50.891 11:47:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:50.891 11:47:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:50.891 11:47:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:50.891 11:47:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:50.891 11:47:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:50.891 11:47:04 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:50.891 11:47:04 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:50.891 11:47:04 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:50.891 11:47:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:50.891 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:35:50.891 11:47:04 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:50.891 11:47:04 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:50.891 11:47:04 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:50.891 11:47:04 -- common/autotest_common.sh@10 -- # set +x 00:35:56.165 INFO: APP EXITING 00:35:56.165 INFO: killing all VMs 00:35:56.165 INFO: killing vhost app 00:35:56.165 INFO: EXIT DONE 00:35:58.701 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:58.701 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:58.701 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:58.701 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:01.994 Cleaning 00:36:01.994 Removing: /var/run/dpdk/spdk0/config 00:36:01.994 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:01.994 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:01.994 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:01.994 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:01.994 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:01.994 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:01.994 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:01.994 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:01.994 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:01.994 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:01.994 Removing: /var/run/dpdk/spdk1/config 00:36:01.994 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:01.994 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:01.994 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:01.994 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:01.994 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:01.994 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:01.994 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:01.994 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:01.994 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:01.994 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:01.994 Removing: /var/run/dpdk/spdk2/config 00:36:01.994 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:01.994 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:01.994 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:01.994 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:01.994 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:01.994 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:01.994 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:01.994 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:01.994 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:01.994 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:01.994 Removing: /var/run/dpdk/spdk3/config 00:36:01.994 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:01.994 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:01.994 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:01.994 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:01.994 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:01.994 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:01.994 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:01.994 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:01.994 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:01.994 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:01.994 Removing: /var/run/dpdk/spdk4/config 00:36:01.994 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:01.994 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:01.994 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:01.994 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:01.994 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:01.994 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:01.994 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:01.994 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:01.994 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:01.994 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:36:01.994 Removing: /dev/shm/bdev_svc_trace.1 00:36:01.994 Removing: /dev/shm/nvmf_trace.0 00:36:01.994 Removing: /dev/shm/spdk_tgt_trace.pid2065588 00:36:01.994 Removing: /var/run/dpdk/spdk0 00:36:01.994 Removing: /var/run/dpdk/spdk1 00:36:01.994 Removing: /var/run/dpdk/spdk2 00:36:01.994 Removing: /var/run/dpdk/spdk3 00:36:01.994 Removing: /var/run/dpdk/spdk4 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2063443 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2064508 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2065588 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2066240 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2067186 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2067419 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2068392 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2068406 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2068758 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2070279 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2071554 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2071958 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2072156 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2072436 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2072732 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2072984 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2073238 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2073521 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2074259 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2077263 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2077520 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2077775 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2077781 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2078283 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2078381 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2078780 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2078913 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2079266 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2079271 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2079530 00:36:01.994 Removing: 
/var/run/dpdk/spdk_pid2079625 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2080117 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2080365 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2080660 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2084594 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2088884 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2099015 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2099660 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2104238 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2104699 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2109193 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2115091 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2117841 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2128015 00:36:01.994 Removing: /var/run/dpdk/spdk_pid2137034 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2138665 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2139587 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2156982 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2161054 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2206814 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2212003 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2217759 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2224452 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2224479 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2225264 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2226091 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2227004 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2227562 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2227696 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2227926 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2227942 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2227966 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2228859 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2229770 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2230682 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2231156 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2231223 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2231574 
00:36:01.995 Removing: /var/run/dpdk/spdk_pid2232627 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2233618 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2241753 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2271058 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2275578 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2277199 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2279012 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2279244 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2279314 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2279566 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2280127 00:36:01.995 Removing: /var/run/dpdk/spdk_pid2282352 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2283120 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2283614 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2285722 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2286217 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2286927 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2291028 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2296595 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2296596 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2296598 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2300371 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2308724 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2312776 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2318808 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2320297 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2321627 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2322939 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2328112 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2332435 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2336341 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2343890 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2343899 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2348603 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2348836 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2349062 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2349443 00:36:02.255 Removing: 
/var/run/dpdk/spdk_pid2349530 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2354009 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2354583 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2358921 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2361527 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2366848 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2372179 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2381473 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2388453 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2388461 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2407042 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2407521 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2408158 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2408680 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2409392 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2409894 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2410406 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2411055 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2415092 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2415331 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2421392 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2421591 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2427435 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2431667 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2441396 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2442068 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2446342 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2446595 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2450836 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2456472 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2459058 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2469015 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2478208 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2480027 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2480858 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2496854 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2500674 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2503543 
00:36:02.255 Removing: /var/run/dpdk/spdk_pid2511272 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2511317 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2516354 00:36:02.255 Removing: /var/run/dpdk/spdk_pid2518283 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2520596 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2521848 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2523817 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2524880 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2533626 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2534084 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2534701 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2537029 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2537493 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2537961 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2541781 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2541792 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2543328 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2543877 00:36:02.515 Removing: /var/run/dpdk/spdk_pid2543962 00:36:02.515 Clean 00:36:02.515 11:47:16 -- common/autotest_common.sh@1453 -- # return 0 00:36:02.515 11:47:16 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:02.515 11:47:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:02.515 11:47:16 -- common/autotest_common.sh@10 -- # set +x 00:36:02.515 11:47:16 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:02.515 11:47:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:02.515 11:47:16 -- common/autotest_common.sh@10 -- # set +x 00:36:02.515 11:47:16 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:02.515 11:47:16 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:02.515 11:47:16 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:02.515 11:47:16 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:02.515 11:47:16 
-- spdk/autotest.sh@398 -- # hostname 00:36:02.515 11:47:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:02.774 geninfo: WARNING: invalid characters removed from testname! 00:36:24.716 11:47:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:26.622 11:47:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:28.528 11:47:42 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:30.433 11:47:43 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:32.337 11:47:45 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:34.241 11:47:47 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:36.153 11:47:49 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:36.153 11:47:49 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:36.153 11:47:49 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:36:36.153 11:47:49 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:36.153 11:47:49 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:36.153 11:47:49 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:36.153 + [[ -n 
1986199 ]] 00:36:36.153 + sudo kill 1986199 00:36:36.162 [Pipeline] } 00:36:36.177 [Pipeline] // stage 00:36:36.183 [Pipeline] } 00:36:36.197 [Pipeline] // timeout 00:36:36.203 [Pipeline] } 00:36:36.217 [Pipeline] // catchError 00:36:36.223 [Pipeline] } 00:36:36.237 [Pipeline] // wrap 00:36:36.243 [Pipeline] } 00:36:36.256 [Pipeline] // catchError 00:36:36.266 [Pipeline] stage 00:36:36.268 [Pipeline] { (Epilogue) 00:36:36.282 [Pipeline] catchError 00:36:36.284 [Pipeline] { 00:36:36.297 [Pipeline] echo 00:36:36.299 Cleanup processes 00:36:36.304 [Pipeline] sh 00:36:36.668 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:36.668 2554593 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:36.700 [Pipeline] sh 00:36:36.985 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:36.985 ++ grep -v 'sudo pgrep' 00:36:36.985 ++ awk '{print $1}' 00:36:36.985 + sudo kill -9 00:36:36.985 + true 00:36:36.997 [Pipeline] sh 00:36:37.282 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:49.509 [Pipeline] sh 00:36:49.798 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:49.798 Artifacts sizes are good 00:36:49.814 [Pipeline] archiveArtifacts 00:36:49.821 Archiving artifacts 00:36:49.955 [Pipeline] sh 00:36:50.241 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:50.256 [Pipeline] cleanWs 00:36:50.266 [WS-CLEANUP] Deleting project workspace... 00:36:50.266 [WS-CLEANUP] Deferred wipeout is used... 00:36:50.273 [WS-CLEANUP] done 00:36:50.274 [Pipeline] } 00:36:50.292 [Pipeline] // catchError 00:36:50.304 [Pipeline] sh 00:36:50.589 + logger -p user.info -t JENKINS-CI 00:36:50.598 [Pipeline] } 00:36:50.612 [Pipeline] // stage 00:36:50.618 [Pipeline] } 00:36:50.632 [Pipeline] // node 00:36:50.637 [Pipeline] End of Pipeline 00:36:50.676 Finished: SUCCESS